RDRAND, etc. [was Re: Slow zone signing with ECDSA]

Timothe Litt litt at acm.org
Thu Apr 20 12:33:38 UTC 2017


On 20-Apr-17 01:26, Paul Kosinski wrote:
> "The tinfoil hat brigade in some distributions has resisted using them,
> fearing some conspiracy to provide not-so-random numbers."
>
> I think the NSA *did*, in fact, compromise the "Dual Elliptic Curve
> Deterministic Random Bit Generator" and paid RSA to make it the default
> in one of their products -- https://en.wikipedia.org/wiki/Dual_EC_DRBG.
>
>
My comment was specifically directed at the RDRAND source for
/dev/*random.  The point is that even if the source were somehow
compromised, it is mixed with other sources using one-way algorithms.
You can search for the full discussions; it's by no means simple to
envision a scheme where a compromised RDRAND remains an effective
attack on /dev/*random after the mixing with other sources and
whitening.  Merge entropy from multiple machines (e.g. with
entropybroker), and it's even more challenging.
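
To make the mixing point concrete, here's a toy sketch in Python.  It
illustrates the principle only - it is NOT the kernel's actual pool
code, and the secondary sources shown are stand-ins:

    import hashlib
    import os
    import time

    def mixed_random(nbytes=32):
        """Toy mixer: hash several sources together.  A hostile source
        cannot steer the SHA-256 output without also knowing every
        other input - that's the one-way mixing argument."""
        h = hashlib.sha256()
        h.update(os.urandom(32))                     # stand-in for a hardware RNG (e.g. RDRAND)
        h.update(time.time_ns().to_bytes(8, "big"))  # timing jitter
        h.update(os.getpid().to_bytes(4, "big"))     # other machine state
        return h.digest()[:nbytes]

    print(mixed_random().hex())

The kernel's pool is far more elaborate, but the trust argument is the
same: to control the output, an attacker has to control *all* the
inputs.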

On the other hand, if you have legitimate worries about being an
attractive target - or are just into conspiracy theories:

There are far more straightforward ways to attack hardware.  Get to a
microcode patch mechanism & target password prompts.  Or introduce
floating point math errors in specific computations.  Use
"unpredictable" bits as covert channels.  For higher bandwidth, see the
papers on how forcing cache conflicts can produce a high bandwidth
covert channel.  Harvest data *before* it's encrypted.  Why not attack
the AES acceleration instructions?  Detect use of the compare
instructions in code that tests for randomness and fudge the test
results.  There are papers on introducing hardware monitors that can be
concealed at the foundries.  Use power consumption to send data back
through the power lines - or RF.  (Is your machine room in a double
Faraday cage with interlocking doors?  Or do bits leak out when you open
the door?  What?  No cage?)  Do your DRAMs have data transmitters? 
Could your power supply capacitors hide a microcomputer?  There's a lot
you *can* worry about.

Software is an easier vector - why not duplicate ISC's signing keys and
send you a different version of BIND?  Open source means you CAN read
all the code that you get and inspect it for subtle security flaws.  Do
you?  Really?  Or do you just trust the people at ISC?  Breaking ONE
key is a whole lot easier than attacking everyone's encrypted data.
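
If you do want a basic check, ISC publishes detached PGP signatures for
its release tarballs.  A minimal sketch - the file names are
illustrative, and it assumes gpg is installed and ISC's release-signing
key is already in your keyring, with its fingerprint verified
out-of-band:

    import subprocess

    result = subprocess.run(
        ["gpg", "--verify", "bind-9.10.5.tar.gz.asc", "bind-9.10.5.tar.gz"],
        capture_output=True, text=True,
    )
    print(result.stderr)                 # gpg reports status on stderr
    if result.returncode != 0:
        raise SystemExit("Signature check FAILED - do not build/install")

Of course, that only moves the trust to the key distribution channel -
which is exactly the point.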

The easiest vector is the oldest - compromise the people.  Snowden,
Manning, etc...  And phishing works - maybe even on you.

Then ask, "What's the alternative?"  You can build your own hardware -
if you have the expertise.  After all, those DIY plans for RNGs on the
internet could have been tweaked by your favorite government agency. 
But don't stop with the hardware RNG - build your own CPU from
components that you fabricate yourself.  And memory.  And IO.  And
routers.  And... In your own foundry - using tooling that you developed
(it can be booby-trapped too).  When you package your chips - you are
making your own packages, right?...  And testing the encapsulant for
nanoprobes?  Better make that too.

You can decide to trust someone else - a USB key or SSL accelerator,
or an HSM, or auditors and software vendors, or your in-house staff,
or ...

You can hope that the penetration attacks are diverse, and run all your
computations on 15 wildly different platforms and compare the results. 
Not on one system, of course - the comparator/voter could be
compromised.  Pay several people to develop different versions of your
code - remember, it's not just hardware.  Or just build a truly secure
system - the one with no I/O, no power, and no physical access.
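
The comparator idea, as a sketch (hypothetical, and note the catch: the
voter itself is now the thing you have to trust):

    from collections import Counter

    def majority_vote(answers):
        """Accept a result only if a strict majority of independent
        platforms agree on it."""
        value, count = Counter(answers).most_common(1)[0]
        if count * 2 <= len(answers):
            raise RuntimeError("no majority - assume a bug or compromise")
        return value

    # Hypothetical results from wildly different machines:
    print(majority_vote(["0xdeadbeef"] * 4 + ["0xfeedface"]))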

If you're a *very* high value target, you may want to deploy all the
countermeasures that you can afford.

You still need to trust someone.  Or someone trusts you.

Most people (and institutions) selectively apply the countermeasures
that they can reasonably afford, balancing cost against the threat to
them.  Do the basic things, like taking care of your people, audits,
modest key lifetimes, secure storage for important keys, physical
access controls - and, did I mention, backups?

Given all that - are the CPUs' random sources sufficiently likely to be
compromised that excluding them as one of the inputs to the OSs'
entropy pools is rational?  Even if they are, is the resulting
system-level weakness cheap enough to exploit that YOU are a worthwhile
target?  (Contrary to fiction, attackers do not have unlimited
budgets/resources.)  How does that compare to the business lost when
your webserver/DNS/email goes unresponsive due to a lack of entropy?
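
On the starvation side, Linux at least lets you see the problem coming
by exposing the kernel's entropy estimate.  A sketch (the proc path is
Linux-specific):

    def entropy_avail(path="/proc/sys/kernel/random/entropy_avail"):
        """Return the kernel's current entropy estimate, in bits."""
        with open(path) as f:
            return int(f.read())

    # A blocking /dev/random stalls readers when this stays near zero.
    print("entropy_avail:", entropy_avail())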

Considering what I know (and what I know I don't know), I don't put
using RDRAND as an input to /dev/*random very high on my worry list -
and I think that the distributions that exclude it qualify for the
overly worried/"tinfoil hat" moniker.  However, I'm by no means a
Pollyanna - I do worry about what I consider important risks.

Of course, what worries me may not worry you.  And it's always fun to
dream up theoretical threats. 

By the way, have you seen Mark Andrews recently?  How sure are you that
he hasn't been replaced by aliens?  Or a really good chatbot?   Has he
been Turing tested recently?  (And can we trust the test, er, certificate?)

Timothe Litt
ACM Distinguished Engineer
--------------------------
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

