[4084] in cryptography@c2.net mail archive


Re: Intel announcements at RSA '99

daemon@ATHENA.MIT.EDU (David R. Conrad)
Thu Jan 28 13:26:54 1999

Date: Thu, 28 Jan 1999 13:00:20 -0500 (EST)
From: "David R. Conrad" <drc@adni.net>
To: cryptography@c2.net
In-Reply-To: <199901262059.NAA29068@nyx10.nyx.net>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On Tue, 26 Jan 1999, Colin Plumb wrote:

> My basic point is the same as the above: software can whiten the bit
> stream just as easily as hardware, so including any such processing
> in hardware is not a very valuable use of transistors.  However,
> access to the unwhitened bitstream is essential for quality assurance
> purposes.  Serious users need that to assess the quality of the random
> numbers and, indeed, whether the generator has failed entirely.

Absolutely, but if they feel like burning SHA-1 in silicon then let's not
dissuade them; simply make clear to them that the RNG and SHA-1 must be
independent functions, and not RNG | SHA-1 > registers.

With both, people are free to directly sample the RNG to test it, or to
use their own whitening on it; they can also grab bits from the RNG and
then whiten them with the hardware SHA-1.  The SHA-1 implementation can
be tested separately, of course.  And whether people use the RNG or not,
SHA-1 becomes somewhat-to-considerably more efficient on these chips.
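To make the independent-functions point concrete, here is a minimal sketch in Python (with a hypothetical read_raw_rng() standing in for reading the hardware generator's register; os.urandom is just a placeholder source) of sampling the raw stream and whitening it as two separate steps:

```python
import hashlib
import os

def read_raw_rng(n):
    # Hypothetical stand-in for reading n bytes of raw, unwhitened
    # bits from the hardware generator's register.
    return os.urandom(n)

def whiten(raw):
    # Hash the raw pool down to a 160-bit digest; SHA-1 acts as the
    # whitening function, in software or in silicon.
    return hashlib.sha1(raw).digest()

raw = read_raw_rng(64)   # sample the raw stream (testable on its own)
seed = whiten(raw)       # whiten it in a separate, independent step
assert len(seed) == 20   # SHA-1 yields 20 bytes (160 bits)
```

Because the two functions are exposed separately, either can be validated in isolation, which is exactly what a hard-wired RNG | SHA-1 pipeline forecloses.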

(Although I suppose they'll just implement it in microcode?  How much
could SHA-1 be sped up by having a few special instructions, and a few
(probably not too many) transistors devoted to it?)

> (I'm also curious what people think is a good rate.  I think we surprised
> them by saying that one bit per second was adequate.  Anything more can
> be generated by cryptographic means.)

They can probably provide a lot more than that without any sweat, but even
at 1 bps a /dev/random type of driver (what is that called under Windows?
I know one already exists) would harvest enough entropy over the uptime of
the machine to satisfy all reasonable demands on it.  They certainly
shouldn't go to any trouble to try to get the rate up.
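A back-of-the-envelope calculation (my sketch, not from the original exchange; the 160-bit seed size is an assumption) shows why 1 bps is plenty for seeding a cryptographic generator:

```python
RATE_BPS = 1        # assumed raw output rate, bits per second
SEED_BITS = 160     # e.g. one SHA-1-sized seed for a software PRNG

# Time to gather one full seed at the minimal rate:
seconds_per_seed = SEED_BITS / RATE_BPS   # 160 s, under three minutes

# Raw bits harvested over a single day of uptime:
bits_per_day = RATE_BPS * 86_400          # 86400 bits
```

Once a driver has a few hundred truly random bits pooled, everything further can be stretched cryptographically, so the hardware rate stops mattering.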

I confess I don't know much about the underlying physics of this thing.
From what little I do know, it seems to me that limiting the rate wouldn't
produce any higher "quality" bits.  Is that so?

If I'm right, then it seems to me there isn't any particular need to limit
the rate, or to aim for a high rate, and so purely practical concerns
ought to guide that aspect of the design of the circuit.

The only special concern I can think of is that it needs to be designed to
fail safe, which in this case means, "Fail Utterly".  If the "noisy diode"
or whatever is at the heart of this goes dark, it should start producing a
steady stream of 0's.  Failure has to be obvious, and no attempt should be
made to try to limp along if trouble arises.
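A simple run-length health check along these lines (my sketch; the threshold of 64 is an illustrative assumption, not a standard) would catch a generator that has gone dark and started emitting a constant stream:

```python
def generator_failed(bits, max_run=64):
    # Flag failure if any single value repeats max_run times in a row;
    # a dead diode producing a steady stream of 0's trips this at once.
    run, last = 0, None
    for b in bits:
        run = run + 1 if b == last else 1
        last = b
        if run >= max_run:
            return True
    return False

assert generator_failed([0] * 100)          # stuck-at-zero: fails loudly
assert not generator_failed([0, 1] * 100)   # a live stream passes
```

The point is that the check's only job is to declare the generator dead, never to paper over a sick one.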

David R. Conrad <drc@adni.net>
This is why I love America -- that any kid can dream "I'm going to get
naked with the President" ... and that dream can actually come true.
What a great country!  -- Michael Moore

-----BEGIN PGP SIGNATURE-----
Version: PGPfreeware 5.0i for non-commercial use
Charset: noconv

iQA/AwUBNrClwoPOYu8Zk+GuEQIceQCgkCTRPgHkQ4AmtLw3l7Y6Usid9ZsAoKzs
WX6IlfC1YcePNIqBREn6mXaM
=3OMw
-----END PGP SIGNATURE-----
