[2794] in cryptography@c2.net mail archive
Re: AES - draft of request for comments on candidate algorithms
daemon@ATHENA.MIT.EDU (Arnold G. Reinhold)
Wed Jun 3 10:27:35 1998
In-Reply-To: <3.0.5.32.19980529162254.00806100@shell7.ba.best.com>
Date: Wed, 3 Jun 1998 09:46:02 -0400
To: Nick Szabo <szabo@best.com>, Edward Roback <edward.roback@nist.gov>
From: "Arnold G. Reinhold" <reinhold@world.std.com>
Cc: <bill.stewart@pobox.com>, cryptography@c2.net
At 4:22 PM -0700 5/29/98, Nick Szabo wrote:
>Arnold Reinhold:
>>You may want to go further in defining what "indistinguishable from a
>>stream of random bits" means. I would expect the tests in FIPS-140 as a
>>minimum, along with other widely used randomness tests and a discussion
>>of the possibility of special input streams and/or keys that could
>>produce statistically non-random output.
>
>Statistical tests are useful primarily for catching implementation bugs,
>and for rejecting obviously insecure algorithms. They don't positively
>demonstrate the security of the algorithm. Tomorrow somebody
>could invent another statistical test that the ciphertext will fail.
>Probably many unpublished tests already exist which uncover kinds of
>regularity not uncovered by the widely known tests.
>
>Furthermore, the algorithm designer should assume that the cryptanalyst has
>more computer time available for running such tests than the designer.
>Many of the more sophisticated tests are probabilistic search algorithms
>which can uncover more regularity the longer they run.
>
>It's much better to mathematically prove that the algorithm produces output
>with a universal property of pseudorandomness, such as passing the next-bit
>test, or (equivalently) prove that it passes _all_ possible probabilistic
>poly-time statistical tests. This is the formal crypto theory definition
>of "indistinguishable from random".
>
>For example, it has been proved that the Blum-Blum-Shub generator passes
>all probabilistic poly-time statistical tests, assuming factoring is hard.
>
>Alas, to date provably secure algorithms have tended to be less efficient
>than their unprovable cousins. This is a good argument against a "one size
>fits all" algorithmic standard. Some users and communities should be free
>to use provably secure encryption, while others should be free to use more
>doubtful but more efficient algorithms. Both options should be part of any
>standard intended for use by a wide variety of communities.
>
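To make the next-bit test Szabo mentions concrete: a generator fails it
if some efficient predictor can guess bit i+1 from the first i bits
with probability noticeably better than 1/2. Here is a toy Python
sketch against a deliberately weak generator (the LCG constants are
arbitrary, chosen only so that a and c are both odd):

    # A truncated linear congruential generator whose low-order bit
    # simply alternates when a and c are both odd.
    def lcg_bits(seed, count, a=1664525, c=1013904223, m=2**32):
        x = seed
        for _ in range(count):
            x = (a * x + c) % m
            yield x & 1

    bits = list(lcg_bits(seed=12345, count=1000))

    # Next-bit predictor: guess the complement of the previous bit.
    hits = sum(1 for i in range(1, len(bits)) if bits[i] == 1 - bits[i - 1])
    print(hits / (len(bits) - 1))   # 1.0 -- a secure generator forces ~0.5

Blum-Blum-Shub provably admits no such predictor, assuming factoring is
hard.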
There are two paths that I am aware of for establishing the security of a
cipher. The first, traditional method uses general design principles
(confusion, diffusion, etc.), extensive computer testing, and long-term
exposure to scrutiny, either by public experts or by an organization such
as NSA with the resources and experience for secret review.
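The computer testing in that first path can at least be pinned down. As
a minimal sketch, here is the monobit test from FIPS 140-1, which
requires that a 20,000-bit sample contain strictly between 9,654 and
10,346 ones (the OS randomness below merely stands in for cipher
output):

    import os

    def monobit_ok(bits):
        # FIPS 140-1 monobit test over exactly 20,000 bits.
        assert len(bits) == 20000
        ones = sum(bits)
        return 9654 < ones < 10346

    raw = os.urandom(2500)   # 2,500 bytes = 20,000 bits
    bits = [(byte >> i) & 1 for byte in raw for i in range(8)]
    print(monobit_ok(bits))

Passing such tests is necessary but, as Szabo points out above, nowhere
near sufficient.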
The second method tries to provide a mathematical proof of security. As far
as I know, no one has produced a ciphering algorithm whose security has
been proven absolutely. Much of this research uses complexity theory
arguments which may not have practical value for cryptography. See my "Why
P=?NP Has Nothing To Do With Cryptography",
http://world.std.com/~reinhold/p=np.txt
The best results I am aware of show that breaking a certain algorithm is
equivalent to solving some other mathematical problem that is thought to be
hard, such as factoring in the Blum-Blum-Shub case. But the difficulty of
those problems has never been proven, and many mathematicians believe
that further progress is quite likely. A real breakthrough cannot
be ruled out. In addition, these algorithms tend to be much slower than the
traditional ones.
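For reference, the Blum-Blum-Shub generator itself is simple to state:
choose n = p*q with both primes congruent to 3 mod 4, square modulo n
repeatedly, and emit the low-order bit of each state. A toy sketch (the
textbook primes here give no security at all; real use needs a modulus
comparable in size to an RSA key):

    def bbs_bits(seed, count, p=11, q=19):
        # p and q must be primes congruent to 3 mod 4; the seed must
        # be coprime to n = p*q.
        n = p * q
        x = (seed * seed) % n
        for _ in range(count):
            x = (x * x) % n      # one modular squaring per output bit
            yield x & 1

    print(''.join(str(b) for b in bbs_bits(seed=3, count=32)))

Each output bit costs a full modular squaring at a factoring-resistant
modulus size, which is where the slowdown comes from.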
I do not know any rational basis for predicting which is harder: breaking a
well designed traditional cipher or solving problems like factoring. The
deciding factor at the moment is efficiency. For many high speed data
applications, Blum-Blum-Shub and its ilk are orders of magnitude too slow.
There is also an argument that says we should not put all our eggs in one
basket. In applications where symmetric key encryption is suitable, it may
be better not to use a cipher whose only security is equivalence in
strength to the problems on which public key methods are based.
For public key solutions, the use of a traditional cipher for session
encryption appears to compound the risk, since an attacker can compromise
the system either by factoring or by breaking the traditional cipher. But
we can use larger prime products for session key exchanges than we could
afford to use in a Blum-Blum-Shub-based cipher. And changing symmetric keys
every session makes attacks on the traditional algorithm more costly.
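That hybrid arrangement is easy to sketch, with every detail below a
toy: the RSA numbers are the textbook p=61, q=53 pair, there is no
padding, and the hash-counter keystream merely stands in for whatever
symmetric cipher AES selects:

    import hashlib, os

    N, E, D = 3233, 17, 2753    # toy textbook RSA; real moduli are huge

    def keystream_xor(key, data):
        # Placeholder symmetric cipher: SHA-256 in counter mode.
        out = bytearray()
        for i in range(0, len(data), 32):
            pad = hashlib.sha256(key + (i // 32).to_bytes(8, 'big')).digest()
            out.extend(b ^ p for b, p in zip(data[i:i + 32], pad))
        return bytes(out)

    # Sender: a fresh session key every session, wrapped under RSA.
    session_key = 2 + int.from_bytes(os.urandom(2), 'big') % (N - 3)
    wrapped = pow(session_key, E, N)
    ct = keystream_xor(session_key.to_bytes(2, 'big'), b'attack at dawn')

    # Receiver: unwrap with the private exponent, then decrypt.
    k = pow(wrapped, D, N)
    print(keystream_xor(k.to_bytes(2, 'big'), ct))   # b'attack at dawn'

The modulus N can be made far larger than one we could afford to square
for every output bit, and session_key changes on every run.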
Altogether, using a traditional approach for the design of AES appears
sensible.
Arnold Reinhold