Re: Speeding up DH

daemon@ATHENA.MIT.EDU (Colin Plumb)
Tue Sep 23 11:38:11 1997

Date: Tue, 23 Sep 1997 00:24:34 -0600 (MDT)
From: Colin Plumb <colin@nyx.net>
To: frantz@netcom.com, karn@qualcomm.com
Cc: colin@nyx.net, cryptography@c2.net, stewarts@ix.netcom.com

Phil Karn wrote:
> My approach was to build an "overlaid sieve window" at my random
> starting point big enough to expect to hit at least one strong prime
> in the window. The density of primes in the vicinity of 2^n is known
> to be about 1/n, so I conjectured that the density of strong primes
> would be about 1/(n^2).  (Experimentally, this seems to be the case.)

Theoretically, too.  Actually, it's 1/ln(2^n) and 1/ln(2^n)^2; since
ln(2^n) = n*ln 2, that's about 1.44/n, which matches the 1/n rule of
thumb up to a constant.
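
To put numbers on it for the 1024-bit case (my arithmetic, not from
Phil's post):

    import math

    n = 1024
    ln2n = n * math.log(2)    # ln(2**n), about 709.8
    print(1 / ln2n)           # prime density: about 1 in 710
    print(1 / ln2n ** 2)      # strong-prime density: about 1 in 504,000

That's why the million-or-two window Phil suggests below is about right.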

> For a 1024-bit strong prime, your sieve "window" should be at least a
> million or two.  Use a bit map to save RAM, and of course you only
> need to represent odd numbers.

Better, you only need to represent p of the form 6*x+5: if p = 6*x+1,
then (p-1)/2 = 3*x is divisible by 3, so past p = 7 only p with
p mod 6 = 5 can have both p and (p-1)/2 prime.
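
So each sieve bit can stand for a step of 6.  A sketch of the indexing
(names are mine, using Python's secrets for the random start):

    import secrets

    # Only p with p % 6 == 5 can have both p and (p-1)/2 prime (past
    # p = 7), so bit i of the window stands for p = base + 6*i.
    start = secrets.randbits(1024) | (1 << 1023)  # random 1024-bit start
    base = start + (5 - start % 6) % 6            # first candidate, 5 mod 6
    print(base % 6, ((base - 1) // 2) % 3)        # always 5 and 2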

> Using all remaining available RAM, I also built an auxiliary sieve to
> generate a complete list of small primes. With this list, I walk
> through the main sieve marking off any entry for which either p or
> (p-1)/2 is divisible by the small prime.

For a small prime d, given the remainder ((p-1)/2) % d, it's easy to
derive p % d, so you only need to do one modulo operation and then get
to use it twice.
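
In code, writing q for (p-1)/2 (a sketch, names mine):

    # One bignum division serves both sieves: p = 2*q + 1, so from
    # r = q % d we get p % d = (2*r + 1) % d with cheap small arithmetic.
    def both_remainders(q, d):
        r = q % d                   # the only expensive division
        return (2 * r + 1) % d, r   # (p % d, q % d)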

But yes, sieve twice, then Fermat test twice.  Sieving is *so* much faster.
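
Putting it together, here's the shape of the whole pipeline in Python
(a sketch with my own window size, sieve bound, and base-2 Fermat
tests; it is not Phil's code and not bnlib):

    import math
    import secrets

    def small_primes(bound):
        # Plain Eratosthenes; 2 and 3 are omitted because the 6*x+5
        # candidate form already rules them out.
        sieve = bytearray([1]) * (bound + 1)
        sieve[0:2] = b'\x00\x00'
        for i in range(2, math.isqrt(bound) + 1):
            if sieve[i]:
                sieve[i*i::i] = bytearray(len(range(i*i, bound + 1, i)))
        return [i for i in range(5, bound + 1) if sieve[i]]

    def find_strong_prime(bits=1024, window=1 << 21, bound=1 << 16):
        start = secrets.randbits(bits) | (1 << (bits - 1))
        base = start + (5 - start % 6) % 6   # candidate i is p = base + 6*i
        alive = bytearray([1]) * window
        for d in small_primes(bound):
            q0 = (base >> 1) % d             # ((base-1)/2) % d: one division,
            p0 = (2 * q0 + 1) % d            # reused to get base % d for free
            # strike every i with p = base + 6*i divisible by d
            i = (-p0 * pow(6, -1, d)) % d
            alive[i::d] = bytearray(len(range(i, window, d)))
            # strike every i with (p-1)/2 = (base-1)/2 + 3*i divisible by d
            i = (-q0 * pow(3, -1, d)) % d
            alive[i::d] = bytearray(len(range(i, window, d)))
        for i in range(window):
            if alive[i]:                     # survivor: Fermat test twice
                p = base + 6 * i
                q = p >> 1
                if pow(2, q - 1, q) == 1 and pow(2, p - 1, p) == 1:
                    return p
        return None                          # unlucky window; restart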

> The importance of aggressive sieving cannot be overstated. It's been a
> while since I did this, but I seem to recall that by generating a list
> of small prime factors up to about 5 million and rejecting candidates
> divisible by these factors I had to run the primality test on
> only about 0.1% of the odd candidates in my search window. It took a
> while to generate the small factors and to do the sieving, but this
> effort was more than paid back by the savings in avoided Fermat
> tests. So make your list of small prime factors as big as you can, up
> to the point where your machine begins paging.

Actually, you can compute the payoff point.  Consider the question
"should I sieve by small prime d, or so a Fermat test?".  Well, if you sieve,
there's a 2/d chance that it'll exclude the number with work S.
And a (d-2)/d chance that it'll do work S + F.
So the total cost is S + F * (d-2)/d.

If you don't sieve, the cost is F.

The only place the density of primes comes in is the amortized sieving
cost, since you sieve once and amortize it over all the Fermat tests.
This makes S very small.

You should sieve up to the point where it's no longer a win,
i.e. where S + F * (d-2)/d = F, or S*d = F*2, or d = 2 * F/S.
Since S is amortized and linear in the size of the prime, it's
pretty small.  Whereas F is cubic.
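
As a function you can feed your own measured S and F into (names mine):

    def expected_cost(S, F, d):
        # Sieving by odd prime d costs S and kills the candidate with
        # probability 2/d; otherwise you still pay F for the Fermat test.
        return S + F * (d - 2) / d

    # Sieving by d wins while expected_cost(S, F, d) < F,
    # i.e. while d < 2 * F / S.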

ftp://skip.incog.com/pub/bnlib1.1.tar.gz has a sample implementation.
I sieved up to 64K because the sieving cost hits a breakpoint there on
some architectures and I didn't bother to implement it efficiently
for larger d, not because it wouldn't still be worth it.
(You still get most of the benefit.  I forget the number, but
I think the sieving lets only 1 in 340-odd numbers through
to the Fermat test.)
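
A quick way to estimate that fraction, counting over all integers in
the window (my arithmetic, and only a ballpark; it reuses the
small_primes() helper from the sketch above):

    frac = 1.0 / 2 / 3               # d = 2: p odd; d = 3: p is 5 mod 6
    for d in small_primes(1 << 16):  # the odd primes from 5 up
        frac *= (d - 2) / d          # d hits neither p nor (p-1)/2
    print(1 / frac)                  # prints a few hundred
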
-- 
	-Colin
