Modern cryptographic algorithms such as RSA rely on primality tests to generate large random primes.
Specifically, RSA uses the product of two prime numbers to generate the public and private keys; if these primes are predictable, an attacker can break the encryption scheme more easily.
In practice, large random primes are constructed by generating pseudorandom numbers and then applying a primality test such as the Fermat test, or a stronger one such as the Miller–Rabin test.
These probabilistic tests can fail: certain composite numbers, called pseudoprimes, pass the test despite not being prime.
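As a concrete illustration, here is a minimal Python sketch of the Fermat test (the function name and default round count are my own choices, not part of the work described here); it also shows 341 = 11 × 31, the smallest base-2 Fermat pseudoprime, slipping past a single-base check.

```python
import random

def fermat_test(n, rounds=20):
    """Fermat test: n is 'probably prime' if a^(n-1) ≡ 1 (mod n)
    for every random base a we try; any failing base proves n composite."""
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False  # a is a Fermat witness: n is certainly composite
    return True  # probably prime; pseudoprimes can still slip through

# 341 = 11 * 31 is composite, yet it passes the base-2 check:
print(pow(2, 340, 341))  # prints 1, so base 2 alone wrongly suggests "prime"
```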
My research focuses on analyzing a class of Perrin-type primality tests built from Lucas-type sequences. Each primality test corresponds to a sequence with two integer parameters $(a, b)$, defined by $V_0 = 2$, $V_1 = a$, and $V_n = a V_{n-1} - b V_{n-2}$. If $p$ is prime, then we have the congruence $V_p \equiv a \pmod{p}$, so any composite $n$ with $V_n \equiv a \pmod{n}$ is a pseudoprime for the test with parameters $(a, b)$.
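A minimal sketch of how such a test might be computed, assuming the standard Lucas-sequence recurrence written above (the function names are illustrative):

```python
def lucas_v_mod(a, b, n):
    """Compute V_n (mod n) for V_0 = 2, V_1 = a, V_k = a*V_{k-1} - b*V_{k-2}."""
    v_prev, v_curr = 2 % n, a % n
    for _ in range(n - 1):  # after k steps, v_curr holds V_{k+1} (mod n)
        v_prev, v_curr = v_curr, (a * v_curr - b * v_prev) % n
    return v_curr

def lucas_type_test(a, b, n):
    """n passes the (a, b) test if V_n ≡ a (mod n), as every prime must."""
    return n >= 2 and lucas_v_mod(a, b, n) == a % n
```

The linear loop is adequate for the small ranges discussed here (N up to 1,000); for large n one would instead compute V_n by matrix exponentiation in O(log n) steps.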
My goals are to understand prime numbers better and to find ways to reduce the number of pseudoprimes in probabilistic primality tests. To that end, I visualize the quality of each primality test through the number of pseudoprimes up to N and the size of its smallest pseudoprime, then analyze the extreme cases that stand out from the visualizations and look for patterns among them.
The least promising case I found by counting pseudoprimes is the Lucas-type sequence (5, 2), which has 53 pseudoprimes up to 1,000. However, this count drops to 0 after restricting to 3-rough numbers. The best case I found is the Lucas-type sequence (8, 6), which has only 4 pseudoprimes up to 1,000, yet it still has one pseudoprime after the 3-rough restriction. This shows that a primality test that initially looks worse than another can actually be the better test once pseudoprimes divisible by 2 or 3 are excluded. I then generated more general graphs to compare primality tests after restricting to 3-rough pseudoprimes.
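A sketch of how such counts could be reproduced, building on lucas_type_test above; the expected values are the ones reported in this section, and could differ if the actual tests include conditions beyond the congruence:

```python
def is_prime(n):
    """Simple trial division; adequate for the small N used here."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def count_pseudoprimes(a, b, N, rough3=False):
    """Count composites n <= N that pass the (a, b) test.
    With rough3=True, first discard candidates divisible by 2 or 3."""
    count = 0
    for n in range(2, N + 1):
        if rough3 and (n % 2 == 0 or n % 3 == 0):
            continue
        if lucas_type_test(a, b, n) and not is_prime(n):
            count += 1
    return count

# Counts reported above:
# count_pseudoprimes(5, 2, 1000)               -> 53 expected
# count_pseudoprimes(5, 2, 1000, rough3=True)  -> 0 expected
# count_pseudoprimes(8, 6, 1000)               -> 4 expected
# count_pseudoprimes(8, 6, 1000, rough3=True)  -> 1 expected
```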
Many pseudoprimes are divisible by 2 or 3, so the primality tests can be improved by adding a condition after each test that restricts attention to 3-rough numbers; a sketch of the combined test follows.
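One way the restriction might be wired into the test itself, again as a sketch using the illustrative functions above:

```python
def lucas_type_test_3rough(a, b, n):
    """Lucas-type test strengthened by the 3-rough restriction:
    reject any n > 3 that is divisible by 2 or 3 before testing."""
    if n in (2, 3):
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    return lucas_type_test(a, b, n)
```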