[1158] in cryptography@c2.net mail archive
Re: Better DES challenge update
daemon@ATHENA.MIT.EDU (Steven Bellovin)
Wed Jul 2 23:35:03 1997
To: die@die.com
cc: andreas@artcom.de (Andreas Bogk), eli@gs160.sp.cs.cmu.edu,
cryptography@c2.net, crisp@netcom.com
Date: Wed, 02 Jul 1997 20:21:15 -0400
From: Steven Bellovin <smb@research.att.com>
Not sure what the classic DES case really is.  A real-world working
DES decryptor would almost certainly see a lot of unknown or
incompletely known plaintext to chew on.
Certainly a major example of this (as others have pointed out)
is otherwise unknown ASCII text whose high-order (parity) bits
are either all 0, all 1, or set for even or odd parity on each
byte.  But there are other obvious examples, such as strings
shorter than 8 bytes at known offsets in a message (such as
compression algorithm headers), or partially known strings
with several variable bytes (like time/date headers where the
exact time might not be known due to clock skew).  Some of
these specify enough bits that the false hit rate would be very
low, but others - such as the very important ASCII text case -
generate hits as often as once every 256 keys tried.
A real world machine would want the most flexible hit
testing architecture possible.
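As an illustration only (not taken from any of the actual designs),
here is a minimal software sketch of the ASCII hit test described
above: accept a trial decryption of an 8-byte block only if the high
(parity) bit of every byte is clear, as it would be for 7-bit ASCII.
A random wrong key passes this test with probability 2^-8, which is
the once-per-256-keys false hit rate mentioned above.

    /* Sketch of an ASCII-text hit test for one 8-byte trial
     * decryption; illustrative only, not from the papers below. */
    #include <stdint.h>

    /* Return 1 if the block looks like 7-bit ASCII, 0 otherwise. */
    static int ascii_hit(const uint8_t block[8])
    {
        uint8_t high_bits = 0;

        for (int i = 0; i < 8; i++)
            high_bits |= block[i];

        /* A hit iff no byte has its top (parity) bit set. */
        return (high_bits & 0x80) == 0;
    }

The all-1s and even/odd-parity variants are the same idea with a
different per-byte predicate; a flexible tester would let you pick
the predicate per job.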
Let me point to two papers:
ftp://ftp.research.att.com/dist/smb/recog.ps
ftp://ftp.research.att.com/dist/smb/probtxt.ps
The former, by David Wagner and myself, describes a programmable plaintext
recognizer designed to fit on-chip with a Wiener engine. The latter, by
me, analyzes IPSEC for probable plaintext. The hardware feature you want
for the latter is a "population count under mask" -- you XOR the target
word with the trial decryption, then count how many 0 bits are in selected
positions. A variant design described in the paper uses two DES engines
per chip, running off the same key generator but on the corresponding field
from two different encryptions.
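For concreteness, here is a minimal software sketch of the
"population count under mask" test, assuming hypothetical 64-bit
target, trial, and mask words (the mask selects the bit positions
where probable plaintext is known); the hardware version would do
the same thing in one step.

    /* Sketch of population count under mask; illustrative only. */
    #include <stdint.h>

    /* Count the 1 bits in a 64-bit word (portable popcount). */
    static int popcount64(uint64_t x)
    {
        int n = 0;
        while (x) {
            x &= x - 1;   /* clear the lowest set bit */
            n++;
        }
        return n;
    }

    /* Number of selected bit positions where trial matches target. */
    static int match_count(uint64_t target, uint64_t trial, uint64_t mask)
    {
        /* 1 bits in diff = mismatches among the selected positions. */
        uint64_t diff = (target ^ trial) & mask;

        /* 0 bits under the mask are matches. */
        return popcount64(mask) - popcount64(diff);
    }

    /* Keep the trial key as a candidate if enough selected bits agree. */
    static int probable_hit(uint64_t target, uint64_t trial,
                            uint64_t mask, int threshold)
    {
        return match_count(target, trial, mask) >= threshold;
    }

The threshold is what makes the test tolerant of partially known or
slightly wrong plaintext (clock skew, variable bytes, and so on)
rather than demanding an exact match.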