[1400] in cryptography@c2.net mail archive


Re: Checking for cheaters in distributed challenges

daemon@ATHENA.MIT.EDU (Arnold G. Reinhold)
Mon Sep 1 11:33:56 1997

In-Reply-To: <3.0.3.32.19970831210116.006b6d54@schloss.li>
Date: Mon, 1 Sep 1997 11:18:49 -0400
To: Black Unicorn <unicorn@schloss.li>, Steve Reid <sreid@sea-to-sky.net>
From: "Arnold G. Reinhold" <reinhold@world.std.com>
Cc: cryptography@c2.net

At 9:01 PM -0500 8/31/97, Black Unicorn wrote:
>At 01:09 PM 8/31/97 -0700, you wrote:
>
>But an attacker could still refrain from reporting all near misses, or
>design a process to search only for a few near misses and report a fraction
>of them (both to skew the stats and to hide under the cloak of
>reliability).  Can the number of near misses in a given subset of the
>entire keyspace be determined with any accuracy?  If not, then we still
>have the same problem- i.e. relying on statistics to compare the reported
>near misses of a small fraction of keyspace to the expected average for the
>whole keyspace.  How is the expected average obtained?  We hope, of course,
>that the keyspace is linear enough that a reasonably accurate number on
>near misses can be arrived at by simple permutation math, but
>cryptographically random and statistically random output are not the same
>thing and it is entirely possible (even likely) that some near misses might
>be precluded by the nature of the keyspace, the cipher, or key setup.
>Determining a number for near misses in that context is a difficult
>prospect, and ironically, more difficult with ciphers which have weaker
>keyspace or key setup processes.  Now you're in a position where the
>political side of distributed cryptanalysis (showing the world that a
>cipher is weak) is more difficult for those ciphers which are weak because
>of nonlinear key space.
>
>Statistical anomalies in output for a small keyspace could result in
>ignoring valued submitters who have been saddled with a strangely
>behaving block of keyspace.

If a cipher has a key space that is not statistically uniform with respect
to finding near misses, the cracking program can pre-transform the search
keys with, say, a simple LFSR. The added computational load would be minor,
except perhaps in the case of a key schedule that perversely favored
sequential keys. One could then rely on the uniformity of near misses.
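The pre-transform could be sketched roughly as follows. This is a minimal illustration, not anyone's actual cracking code: the tap constant 0xB4BCD35C is one commonly cited maximal-length polynomial for a 32-bit Galois LFSR, the key width and the cipher itself are left abstract, and all names are mine.

```python
def lfsr_keys(seed: int, count: int, taps: int = 0xB4BCD35C, width: int = 32):
    """Generate `count` trial keys by stepping a Galois LFSR from `seed`.

    A worker assigned a block of sequential indices instead searches the
    scattered keys the LFSR visits, so near-miss statistics stay uniform
    even if the cipher's key schedule behaves oddly on sequential keys.
    The seed must be nonzero; the taps constant is assumed to give a
    maximal-length (period 2**32 - 1) sequence.
    """
    state = seed
    mask = (1 << width) - 1
    for _ in range(count):
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= taps
        yield state & mask
```

The added work per trial key is one shift and a conditional XOR, which is negligible next to a trial decryption.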

If each submitter is required to sign their submissions, even with an
anonymous signature key, then it seems quite practical to verify that a
major submitter is actually doing the search by sampling their
submissions. If no key is ever found, the blocks of minor submitters can
be checked, or simply re-issued.
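To give a rough sense of how quickly sampling exposes a cheater: if a submitter silently drops a fraction f of the near misses they find, then re-computing s independently chosen candidates from their blocks catches them with probability 1 - (1 - f)^s. A hypothetical helper (the function name and framing are mine, not from the post):

```python
def catch_probability(omitted_fraction: float, samples: int) -> float:
    """Probability that re-computing `samples` randomly chosen near
    misses in a submitter's assigned blocks exposes a cheater who
    silently withholds `omitted_fraction` of the near misses found."""
    return 1.0 - (1.0 - omitted_fraction) ** samples

print(catch_probability(0.5, 10))  # -> 0.9990234375
```

Even a modest spot check makes large-scale withholding of near misses a losing bet; it is withholding the single winning key that sampling cannot catch, which motivates the variant below.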

There is still the problem of someone not reporting a found key. A
variant approach might be to keep the plaintext secret and have
searchers report whenever the first n bits of the hash of a trial
decryption match a test value. All reported values would then be tested
(at search central) against the known plaintext. This would both provide
the check on whether searchers are actually doing the work and prevent
someone from withholding the winning key.
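The secret-plaintext variant might look like the following sketch. All names and parameters here are assumptions: a repeating-key XOR stands in for the real cipher under attack, and SHA-256 is used purely as an illustrative hash.

```python
import hashlib

N_BITS = 20  # report threshold: roughly one report per 2**20 trial keys

def trial_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    # Toy stand-in for the real cipher: repeating-key XOR.
    return bytes(c ^ key[i % len(key)] for i, c in enumerate(ciphertext))

def hash_prefix(data: bytes, n_bits: int) -> int:
    # First n_bits of the hash of a trial decryption.
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest[:4], "big") >> (32 - n_bits)

def searcher_report(key: bytes, ciphertext: bytes, test_value: int,
                    n_bits: int = N_BITS) -> bool:
    """Searcher side: report a key iff the hash prefix of its trial
    decryption matches the published test value.  The searcher cannot
    tell a statistical hit from the winning key."""
    return hash_prefix(trial_decrypt(key, ciphertext), n_bits) == test_value

def central_check(key: bytes, ciphertext: bytes, plaintext: bytes) -> bool:
    """Search central: only it holds the plaintext, so only it can
    recognize the true winning key among the reports."""
    return trial_decrypt(key, ciphertext) == plaintext
```

The winning key always satisfies the report condition (its trial decryption is the plaintext, whose hash prefix is the published test value), so a searcher who submits all matching keys cannot selectively withhold it; the expected stream of false hits doubles as the evidence that the searcher is doing the work.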

Arnold Reinhold


