The Crypto Library Disaster

The first time someone writes (or contributes to, so this does not apply only to brand-new systems) an application using low-level cryptography, they have to choose between two bad and two very bad solutions. This post explains the reasons for this recurrent disaster and tries to give some ideas to avoid it…

The worst possible solution is of course to build a new crypto implementation from scratch: not only is it a very long task, but there are many critical details that newcomers do not handle. For instance, to generate RSA keys it is not enough to pick two prime numbers of the right size (e.g., if you pick two consecutive primes, and there are infinitely many such pairs, the two primes can be recovered almost instantly from their product). Note that this is specified in the relevant NIST document (and in the similar documents I have seen); the point is that a good big-integer library is not enough to build a good RSA implementation…
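To see how easy the recovery is, here is a minimal sketch (my own illustration, not taken from any particular library) of Fermat's factorisation method applied to a modulus built from two close primes. Toy 64-bit numbers keep it self-contained; a real RSA modulus would need a big-integer library, but the attack works the same way, and just as quickly, whenever the primes are close.

```c
/* Why RSA primes must not be too close together: if p and q are
 * consecutive (or merely close) primes, Fermat's factorisation method
 * recovers them from n = p*q almost instantly, regardless of size.
 * Toy 64-bit numbers are used here for brevity. */
#include <stdio.h>
#include <stdint.h>

static uint64_t isqrt(uint64_t n)          /* integer square root */
{
    uint64_t x = n, y = (x + 1) / 2;
    while (y < x) { x = y; y = (x + n / x) / 2; }
    return x;
}

int main(void)
{
    const uint64_t p = 1000003, q = 1000033;   /* two close primes */
    const uint64_t n = p * q;                  /* the "public" modulus */

    /* Fermat: find a such that a*a - n is a perfect square b*b,
     * then n = (a - b) * (a + b). Start just above sqrt(n). */
    uint64_t a = isqrt(n);
    if (a * a < n) a++;
    for (;;) {
        uint64_t b2 = a * a - n;
        uint64_t b = isqrt(b2);
        if (b * b == b2) {
            printf("n = %llu = %llu * %llu\n",
                   (unsigned long long)n,
                   (unsigned long long)(a - b),
                   (unsigned long long)(a + b));
            return 0;
        }
        a++;
    }
}
```

With well-separated random primes this loop would run for an astronomically long time, which is exactly why the key-generation rules insist on how the primes are chosen, not merely on their size.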

The second worst solution is to use a bad crypto library. Note that I am a bit high church about what I call bad: by my personal criteria, a crypto implementation is bad when it was not designed for strong security usage, for instance:

  • There is no reasonable cryptographic random generator (see the sketch just after this list).
  • There is no FIPS 140-2 certified version of the code (e.g., Botan is bad by this criterion). I have mixed feelings about implementations which only claim to be FIPS 140-2 ready, as the reason they are not candidates for certification is not always explicit or clear.
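To make the first criterion concrete, here is a minimal sketch (using OpenSSL's RAND_bytes purely as an example of a cryptographic generator) of the difference between a general-purpose PRNG and a CSPRNG; a library that only offers something equivalent to rand() fails the test.

```c
/* Illustration of the first criterion: key material must come from a
 * cryptographic random generator, never from a general-purpose PRNG.
 * OpenSSL's RAND_bytes() is used as the example CSPRNG. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <openssl/rand.h>

int main(void)
{
    unsigned char key[32];

    /* Bad: rand() is a small-state PRNG seeded from the clock; an
     * attacker who guesses the seed can reproduce the whole "key". */
    srand((unsigned)time(NULL));
    for (size_t i = 0; i < sizeof key; i++)
        key[i] = (unsigned char)(rand() & 0xff);

    /* Good: RAND_bytes() draws from the library's CSPRNG (itself fed
     * by the OS entropy source) and reports failure explicitly. */
    if (RAND_bytes(key, sizeof key) != 1) {
        fprintf(stderr, "CSPRNG failure, do not use the key\n");
        return 1;
    }

    for (size_t i = 0; i < sizeof key; i++)
        printf("%02x", key[i]);
    printf("\n");
    return 0;
}
```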

So the choice is between a solution which supports crypto hardware well (i.e., PKCS#11) and a solution which works well in software (i.e., OpenSSL):

  • The software/OpenSSL way:
    (I refer to OpenSSL because, even though there are alternatives in the open source world, the choice will very likely be OpenSSL.) It is not so bad: OpenSSL is aggressively optimised for the common cases (heavily used algorithms, i.e., cryptographic primitives in the cryptographers' terminology, are written in assembly for current platforms), security bugs are fixed as soon as they are known, and it covers almost everything one can need. I have more concerns about the non-crypto parts, in particular the ASN.1 parser (or the lack of one) and non-crypto bug fixes delayed for years. But the real problem with OpenSSL and similar software solutions is the support of crypto hardware: PKCS#11 engines are buggy and a nightmare both to debug and to use. So this solution becomes less good when crypto hardware is available and bad when crypto hardware must be used (there is a large community against the use of pure software for the security core, the idea being that software cannot really be protected; for instance this argument constrains FIPS 140-2 certified software to level 1 of the 4 certification levels). A minimal sketch of this way follows the list.
  • The hardware/PKCS#11 way:
    (Here it is simpler: the only hardware-independent generic API is PKCS#11.) The idea is to use the PKCS#11 application programming interface directly. This raises two real-world issues. First, every PKCS#11 provider (a PKCS#11 provider is the piece of software between the application and the hardware, exposing the PKCS#11 API on the application side) implements only a part of the whole PKCS#11 specification, so it is easy to need something in the application which is not supported by a particular Hardware Security Module. Second, when you have no HSM you need a software one, but software tokens were written to help debug PKCS#11 applications, not to be secure themselves, and it is not the best idea to add a layer of software in the security (and sometimes performance) critical path.
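To make the software/OpenSSL way concrete, here is a minimal sketch of a signature with OpenSSL's EVP interface. The in-place key generation is only there to keep the example self-contained; a real application would load the key from a protected key store.

```c
/* Minimal sketch of the "software way": sign a message with OpenSSL's
 * EVP interface. A throwaway RSA key is generated for the example. */
#include <stdio.h>
#include <openssl/evp.h>
#include <openssl/rsa.h>

int main(void)
{
    int ok = 0;
    EVP_PKEY *pkey = NULL;
    EVP_PKEY_CTX *kctx = EVP_PKEY_CTX_new_id(EVP_PKEY_RSA, NULL);
    EVP_MD_CTX *mctx = EVP_MD_CTX_new();
    const unsigned char msg[] = "message to protect";
    unsigned char sig[512];
    size_t siglen = sizeof sig;

    /* Generate a throwaway 2048-bit RSA key pair. */
    if (!kctx || EVP_PKEY_keygen_init(kctx) <= 0 ||
        EVP_PKEY_CTX_set_rsa_keygen_bits(kctx, 2048) <= 0 ||
        EVP_PKEY_keygen(kctx, &pkey) <= 0)
        goto done;

    /* One-shot RSA-SHA256 signature. */
    if (!mctx ||
        EVP_DigestSignInit(mctx, NULL, EVP_sha256(), NULL, pkey) <= 0 ||
        EVP_DigestSign(mctx, sig, &siglen, msg, sizeof msg - 1) <= 0)
        goto done;

    printf("signature: %zu bytes\n", siglen);
    ok = 1;
done:
    EVP_MD_CTX_free(mctx);
    EVP_PKEY_free(pkey);
    EVP_PKEY_CTX_free(kctx);
    return ok ? 0 : 1;
}
```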

Now what to do? The software solution is a dead end unless one is sure no HSM will ever be used, and there are at least three reasons to get crypto hardware one day: first, hardware is considered to be intrinsically more secure, so it will be required in some environments; second, there are things where hardware is simply better, for instance random number generation (by definition a software random generator is a pseudo-random one) and key store protection; and finally, in some usages (even without a security risk analysis to support it) people believe an HSM is an essential part of the security.

So the right thing is to begin with PKCS#11. This adds some constraints (a single global initialisation, sessions, separate sign/verify contexts, etc.), but IMHO most of these constraints lead to better code; for instance, I believe a unified sign/verify context (vs. different context types for signing and verifying) is a bad design: it ignores the difference between a public and a private key. The next step is to make the interface more generic so one can plug in any crypto provider, PKCS#11 or a software library (any library, as long as at least one good one is supported). A way to do this is to strip the PKCS#11 handling out of the SoftHSMv2 implementation, so you end up with code that works either with PKCS#11 or with the Botan and OpenSSL backends of SoftHSMv2. Another benefit is that the code can be improved to use a FIPS 140-2 certified crypto module following the required guidelines, and so can claim to use an embedded FIPS 140-2-validated cryptographic module running per FIPS 140-2 Implementation Guidance section X.Y guidelines.
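Here is a minimal sketch of that PKCS#11 starting point, showing the constraints mentioned above (one global initialisation, a session, a login, and a signing context bound to one private key). It assumes the application is linked directly against a PKCS#11 provider (an HSM driver, or SoftHSMv2 for development); the key label and PIN are placeholders, and a single slot is assumed for brevity.

```c
/* Minimal sketch of the "hardware way": sign with whatever PKCS#11
 * provider the application is linked against. */
#define CK_PTR *
#define CK_DEFINE_FUNCTION(returnType, name)            returnType name
#define CK_DECLARE_FUNCTION(returnType, name)           returnType name
#define CK_DECLARE_FUNCTION_POINTER(returnType, name)   returnType (* name)
#define CK_CALLBACK_FUNCTION(returnType, name)          returnType (* name)
#ifndef NULL_PTR
#define NULL_PTR 0
#endif
#include <pkcs11.h>   /* OASIS/vendor cryptoki header; macros above are its standard platform glue */
#include <stdio.h>

int main(void)
{
    CK_SLOT_ID slot;
    CK_ULONG nslots = 1, nfound = 0, siglen;
    CK_SESSION_HANDLE session;
    CK_OBJECT_HANDLE key;
    CK_OBJECT_CLASS klass = CKO_PRIVATE_KEY;
    CK_UTF8CHAR label[] = "demo-signing-key";            /* placeholder */
    CK_ATTRIBUTE tmpl[] = {
        { CKA_CLASS, &klass, sizeof klass },
        { CKA_LABEL, label,  sizeof label - 1 },
    };
    CK_MECHANISM mech = { CKM_SHA256_RSA_PKCS, NULL_PTR, 0 };
    CK_BYTE data[] = "message to protect";
    CK_BYTE sig[512];

    /* One global initialisation per process. */
    if (C_Initialize(NULL_PTR) != CKR_OK) return 1;
    /* Pick the first slot with a token (single slot assumed), open a session, log in. */
    if (C_GetSlotList(CK_TRUE, &slot, &nslots) != CKR_OK || nslots == 0) return 1;
    if (C_OpenSession(slot, CKF_SERIAL_SESSION, NULL_PTR, NULL_PTR, &session) != CKR_OK) return 1;
    if (C_Login(session, CKU_USER, (CK_UTF8CHAR *)"1234", 4) != CKR_OK) return 1;

    /* Locate the private key by label: the key never leaves the token. */
    if (C_FindObjectsInit(session, tmpl, 2) != CKR_OK ||
        C_FindObjects(session, &key, 1, &nfound) != CKR_OK ||
        C_FindObjectsFinal(session) != CKR_OK || nfound == 0) return 1;

    /* Signing context: one C_SignInit per operation, bound to this key. */
    siglen = sizeof sig;
    if (C_SignInit(session, &mech, key) != CKR_OK ||
        C_Sign(session, data, sizeof data - 1, sig, &siglen) != CKR_OK) return 1;
    printf("signature: %lu bytes\n", (unsigned long)siglen);

    C_Logout(session);
    C_CloseSession(session);
    C_Finalize(NULL_PTR);
    return 0;
}
```

The same code runs unchanged against an HSM or against a software token such as SoftHSMv2, which is precisely the property the generic-provider approach is after.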
