The wrong way to find primes is to generate random numbers and then try to factor them. The right way is to generate random numbers and test whether they are prime. There are several probabilistic primality tests: tests that determine whether a number is prime with a given degree of confidence. Assuming this degree of confidence is large enough, these sorts of tests are good enough. I've heard primes generated in this manner called industrial-grade primes: numbers that are probably prime, with a controllably small chance of error.
Assume a test is set to fail once in 2^50 tries. This means that there is a 1 in 10^15 chance that the test falsely indicates that a composite number is prime. (The test will never falsely indicate that a prime number is composite.) If for some reason you need more confidence that the number is prime, you can set the failure level even lower. On the other hand, if you consider that the odds of the number being composite are 300 million times less than the odds of winning top prize in a state lottery, you might not worry about it so much.
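As a quick sanity check on that arithmetic (this snippet is only illustrative):

# A failure rate of 1 in 2^50 is roughly 1 in 10^15.
print(f"2**50 = {2**50:.3e}")   # prints 1.126e+15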
Overviews of recent developments in the field can be found in [1256, 206]. Other important papers are [1490, 384, 11, 19, 626, 651, 911].
Solovay-Strassen
Robert Solovay and Volker Strassen developed a probabilistic primality testing algorithm [1490]. Their algorithm uses the Jacobi symbol to test if p is prime:

(1) Choose a random number, a, less than p.
(2) If gcd(a,p) ≠ 1, then p fails the test and is composite.
(3) Calculate j = a^((p-1)/2) mod p.
(4) Calculate the Jacobi symbol J(a,p).
(5) If j ≠ J(a,p) (mod p), then p is definitely not prime. If j = J(a,p) (mod p), then the likelihood that p is not prime is no more than 50 percent.
A number a that indicates that p is definitely not prime is called a witness to p's compositeness. If p is composite, the odds of a random a being a witness are no less than 50 percent. Repeat this test t times, with t different random values for a. The odds of a composite number passing all t tests are no more than one in 2^t.
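A minimal sketch of how this might be coded, assuming a standard quadratic-reciprocity routine for the Jacobi symbol (the function names and the default iteration count are illustrative, not from the text):

import random

def jacobi(a, n):
    # Jacobi symbol J(a, n) for odd n > 0, via quadratic reciprocity.
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:            # factor out 2s: J(2, n) = -1 iff n ≡ 3, 5 (mod 8)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                  # reciprocity: flip sign if both ≡ 3 (mod 4)
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0   # 0 signals gcd(a, n) > 1

def solovay_strassen(p, t=50):
    # Returns False if p is definitely composite, True if p is probably prime.
    if p in (2, 3):
        return True
    if p < 2 or p % 2 == 0:
        return False
    for _ in range(t):
        a = random.randrange(2, p)
        j = jacobi(a, p)
        # a is a witness if gcd(a, p) > 1 or a^((p-1)/2) ≠ J(a, p) (mod p).
        if j == 0 or pow(a, (p - 1) // 2, p) != j % p:
            return False
    return True

With t = 50 rounds, a composite slips through with probability at most 1 in 2^50, matching the failure level discussed above.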
Lehmann
Another, simpler, test was developed independently by Lehmann [945]. Here is how it tests if p is prime:

(1) Choose a random number, a, less than p.
(2) Calculate a^((p-1)/2) mod p.
(3) If a^((p-1)/2) ≠ 1 or -1 (mod p), then p is definitely not prime.
(4) If a^((p-1)/2) ≡ 1 or -1 (mod p), then the likelihood that p is not prime is no more than 50 percent.
Again, the odds of a random a being a witness to p's compositeness are no less than 50 percent. Repeat this test t times. If the calculation equals 1 or -1, but does not always equal 1, then p is probably prime with an error rate of 1 in 2^t.
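A sketch under the same assumptions as before (names and the iteration count are my own choices):

import random

def lehmann(p, t=50):
    # Returns False if p is definitely composite, True if p is probably prime.
    if p in (2, 3):
        return True
    if p < 2 or p % 2 == 0:
        return False
    saw_minus_one = False
    for _ in range(t):
        a = random.randrange(2, p)
        r = pow(a, (p - 1) // 2, p)   # a^((p-1)/2) mod p
        if r != 1 and r != p - 1:     # neither 1 nor -1: p is definitely composite
            return False
        if r == p - 1:
            saw_minus_one = True
    # "Does not always equal 1": a prime produces -1 about half the time,
    # so t results that are all 1 mean p is declared composite.
    return saw_minus_one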
Rabin-Miller
The algorithm everyone uses (it's easy) was developed by Michael Rabin, based in part on Gary Miller's ideas [1093, 1284]. Actually, this is a simplified version of the algorithm recommended in the DSS proposal [1149, 1154].
Choose a random number, p, to test. Calculate b, where b is the number of times 2 divides p - 1 (i.e., 2^b is the largest power of 2 that divides p - 1). Then calculate m, such that p = 1 + 2^b * m.

Choose a random number, a, less than p. Then:

(1) Set j = 0 and set z = a^m mod p.
(2) If z = 1, or if z = p - 1, then p passes the test and may be prime.
(3) If j > 0 and z = 1, then p is not prime.
(4) Set j = j + 1. If j < b and z ≠ p - 1, set z = z^2 mod p and go back to step (3). If z = p - 1, then p passes the test and may be prime.
(5) If j = b and z ≠ p - 1, then p is not prime.
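The same loop in code (a minimal, illustrative sketch of the steps above; the default of six iterations echoes the 256-bit figure quoted below):

import random

def rabin_miller(p, t=6):
    # Returns False if p is definitely composite, True if p is probably prime.
    if p in (2, 3):
        return True
    if p < 2 or p % 2 == 0:
        return False
    # Write p - 1 = 2^b * m with m odd, as in the text.
    m, b = p - 1, 0
    while m % 2 == 0:
        m //= 2
        b += 1
    for _ in range(t):
        a = random.randrange(2, p - 1)
        z = pow(a, m, p)              # z = a^m mod p
        if z == 1 or z == p - 1:
            continue                  # p passes this round
        for _ in range(b - 1):
            z = pow(z, 2, p)          # repeated squaring, at most b - 1 times
            if z == p - 1:
                break                 # p passes this round
        else:
            return False              # a is a witness: p is composite
    return True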
The odds of a composite passing decrease faster with this test than with the previous ones. Three-quarters of the possible values of a are guaranteed to be witnesses. This means that a composite number will slip through t tests no more than (1/4)^t of the time, where t is the number of iterations. Actually, these numbers are very pessimistic. For most random numbers, something like 99.9 percent of the possible a values are witnesses [96].
There are even better estimates [417]. For n-bit candidate primes (where n is more than 100), the probability of error in a single test is less than n^2 * 4^(2 - √n). And for a 256-bit n, the odds of error in six tests are less than 1 in 2^51. More theory is in [418].
Practical Considerations
In real-world implementations, prime generation goes quickly.

(1) Generate a random n-bit number, p.
(2) Set the high-order and low-order bits to 1. (The high-order bit ensures that the prime is of the required length; the low-order bit ensures that it is odd.)
(3) Check to make sure p is not divisible by any small primes: 3, 5, 7, 11, and so on.
(4) Perform the Rabin-Miller test for some random a. If p passes, generate another random a and run the test again. If p fails any test, generate another p and try again.
Another option is not to generate a random p each time, but to incrementally search through numbers starting at a random point until you find a prime.
Step (3) is optional, but it is a good idea. Testing a random odd p to make sure it is not divisible by 3, 5, or 7 eliminates 54 percent of the odd numbers before you get to step (4). Testing against all primes less than 100 eliminates 76 percent of the odd numbers; testing against all primes less than 256 eliminates 80 percent. In general, the fraction of odd candidates that is not a multiple of any prime less than n is 1.12/ln n. The larger the n you test up to, the more precomputation is required before you get to the Rabin-Miller test.
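Putting it together, a sketch of the whole generation procedure (the 256 bound and the five Rabin-Miller tests follow the figures in the text; secrets.randbits and the helper names are my own choices, and rabin_miller is the sketch from above):

import secrets

def small_primes(limit):
    # Odd primes below `limit`, by trial division (fine for small limits).
    primes = []
    for n in range(3, limit, 2):
        if all(n % q != 0 for q in primes if q * q <= n):
            primes.append(n)
    return primes

SIEVE = small_primes(256)             # step (3): all primes less than 256

def generate_prime(n_bits, tests=5):
    while True:
        # Steps (1)-(2): random n-bit number with the high and low bits set,
        # so it has the full length and is odd.
        p = secrets.randbits(n_bits) | (1 << (n_bits - 1)) | 1
        # Step (3): cheap trial division removes about 80 percent of candidates.
        if any(p % q == 0 for q in SIEVE):
            continue
        # Step (4): Rabin-Miller with several random bases (sketch above).
        if rabin_miller(p, tests):
            return p
        # Alternatively, instead of drawing a fresh p each time, search
        # incrementally from a random starting point: p += 2 until a
        # probable prime turns up.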