[2785] in cryptography@c2.net mail archive
Re: AES - draft of request for comments on candidate
daemon@ATHENA.MIT.EDU (Nick Szabo)
Fri May 29 20:19:04 1998
Date: Fri, 29 May 1998 16:22:54 -0700
To: "Arnold G. Reinhold" <reinhold@world.std.com>,
Edward Roback <edward.roback@nist.gov>
From: Nick Szabo <szabo@best.com>
Cc: <bill.stewart@pobox.com>, cryptography@c2.net
In-Reply-To: <v03130300b19493039edd@[24.128.118.45]>
Arnold Reinhold:
>You may want to go further in defining what "indistinguishable from a
>stream of random bits" means. I would expect the tests in FIPS-140 as a
>minimum, along with other widely used randomness tests and a discussion
>of the possibility of special input streams and/or keys that could
>produce statistically non-random output.
Statistical tests are useful primarily for catching implementation bugs,
and for rejecting obviously insecure algorithms. They don't positively
demonstrate the security of the algorithm. Tomorrow somebody
could invent another statistical test that the ciphertext will fail.
Probably many unpublished tests already exist which uncover kinds of
regularity not uncovered by the widely known tests.
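As a concrete illustration of how coarse such tests are, here is a minimal
sketch of the monobit test from FIPS 140-1 (one of the tests Reinhold
alludes to): it only checks that the count of ones in a 20,000-bit sample
falls in a narrow band, so it catches gross bias but says nothing about
deeper structure.

```python
def monobit_test(bits):
    """FIPS 140-1 monobit test: in a sample of 20,000 bits, the
    count of ones must fall strictly between 9654 and 10346."""
    assert len(bits) == 20000
    ones = sum(bits)
    return 9654 < ones < 10346

balanced = [0, 1] * 10000  # exactly 10,000 ones: passes
biased = [1] * 20000       # all ones: grossly non-random, fails
print(monobit_test(balanced), monobit_test(biased))  # True False
```

Note that `balanced` passes despite being a perfectly predictable
alternating pattern, which is exactly the point: passing a fixed battery
of statistical tests is necessary but nowhere near sufficient.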
Furthermore, the algorithm designer should assume that the cryptanalyst has
more computer time available for running such tests than the designer.
Many of the more sophisticated tests are probabilistic search algorithms
which can uncover more regularity the longer they run.
It's much better to mathematically prove that the algorithm produces output
with a universal property of pseudorandomness, such as passing the next-bit
test, or (equivalently) prove that it passes _all_ possible probabilistic
poly-time statistical tests. This is the formal crypto theory definition
of "indistinguishable from random".
For example, it has been proved that the Blum-Blum-Shub generator passes
all probabilistic poly-time statistical tests, assuming factoring is hard.
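The generator itself is simple to state: with n = p*q for primes p, q both
congruent to 3 mod 4, repeatedly square modulo n and emit the least
significant bit of each state. A minimal sketch (with toy parameters far
too small for real use):

```python
def bbs_bits(p, q, seed, n):
    """Blum-Blum-Shub: square modulo m = p*q (p, q primes = 3 mod 4),
    emitting the least significant bit of each state.  Its security
    reduction assumes factoring m is hard; these toy primes are for
    illustration only."""
    assert p % 4 == 3 and q % 4 == 3
    m = p * q
    x = (seed * seed) % m  # start from a quadratic residue mod m
    out = []
    for _ in range(n):
        x = (x * x) % m
        out.append(x & 1)
    return out

# Toy parameters -- real use needs primes of cryptographic size
# and a seed coprime to p*q.
bits = bbs_bits(499, 547, 159201, 8)
```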
Alas, to date provably secure algorithms have tended to be less efficient
than their unproven cousins. This is a good argument against a "one size
fits all" algorithmic standard. Some users and communities should be free
to use provably secure encryption, while others should be free to use more
doubtful but more efficient algorithms. Both options should be part of any
standard intended for use by a wide variety of communities.
szabo@best.com
http://www.best.com/~szabo/
PGP D4B9 8A17 9B90 BDFF 9699 500F 1068 E27F 6E49 C4A2