# Summary (what's next?)

Why is factorization hard, yet generating primes easy? Where do we go from here? Created by Brit Cruise.

## Want to join the conversation?

• Does it take the same amount of time for a computer to calculate 7 + 4 as 700 + 400?
If not, how does it differ?
• Yes, it does. One fundamental reason is that 7 and 4 can each be expressed in 3 bits, but 700 needs 10 bits and 400 needs 9 bits.
This means a longer stream of bits must pass through the transistors to make the calculation.
The difference is insignificant, though, like really insignificant.
• Can't people find a prime by first finding another prime, n, and then computing (2^n) - 1?
• Nope. The number 2047 ((2^11) - 1) factors as 23 * 89. This approach is seductive because the first few prime values of n do generate more primes, but as n becomes larger, the resulting numbers are usually not prime. Numbers of this form are known as Mersenne numbers, and the ones that happen to be prime are Mersenne primes. More about them at http://en.wikipedia.org/wiki/Mersenne_prime.
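As a quick sanity check on this reply, a short script (the function name is my own, not from the video) can test 2^n - 1 for the first few prime n using plain trial division:

```python
def is_prime(m):
    """Trial-division primality test; fine for small numbers."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

# 2^n - 1 for prime n: prime for n = 2, 3, 5, 7, but NOT for n = 11.
for n in [2, 3, 5, 7, 11, 13]:
    m = 2**n - 1
    print(n, m, is_prime(m))
```

Running this shows the pattern breaking exactly at n = 11, where 2047 = 23 * 89.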
• Instead of defining the step size by using prime numbers, can we use other well-known sequences that may also work, such as Fibonacci numbers or any sequence that meets Brown's Criterion?
• A big thing in the news right now is that NASA, Google and others are buying some of the world's first quantum computers, which process data in qubits instead of traditional bits. According to Wikipedia, a qubit can be in either of the classic binary positions or a superposition of both, which I've (perhaps incorrectly) imagined as a method of calculation involving levels of certainty rather than absolute values. Any ideas how this new technology might affect problems like finding primes and prime factorization?
• I'm not entirely sure how the use of qubits would change how processing works, but for memory storage it would allow us to use the only-hop-on-primes method, because where 12 bits would hold one integer less than 2^12, 12 qubits would be able to store all integers less than 2^12.
• What if you used the sieve of Eratosthenes WHILE generating primes? You would mark all numbers divisible by the primes found so far up to a certain point, then when you reach that point, treat the next unmarked number as prime and extend the point. Or, store the primes and do a burst of checks. I will make a program to see if it works.
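The sieve this comment builds on can be sketched in a few lines (a minimal version; the function name is my own choice, not from the video):

```python
def sieve(limit):
    """Return all primes <= limit via the sieve of Eratosthenes."""
    is_composite = [False] * (limit + 1)
    primes = []
    for n in range(2, limit + 1):
        if not is_composite[n]:
            primes.append(n)  # unmarked, so n is prime
            # Mark every multiple of n starting at n*n as composite.
            for multiple in range(n * n, limit + 1, n):
                is_composite[multiple] = True
    return primes

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

The commenter's incremental variant would grow `limit` on demand instead of fixing it up front, but the marking step is the same.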
• What was the largest prime number ever found?
• You can Google it. As of this video, the largest known prime had approximately 17 million digits.
• So... I was thinking about this problem a bit and came up with something slightly more efficient, I think, though still not sufficient for really large numbers. The first few steps involved a mod 2 and a mod 5: the mod 2 determines that the number is odd, and the mod 5 removes all numbers ending in 5. Together these keep any number ending in 0, 2, 4, 5, 6, or 8 from being checked more than necessary, skipping 60% of all candidates. That is only a slight improvement over checking odds, and it actually decreases performance slightly on the numbers that really matter.
Instead of increasing the test value by 1 each step, I increased it by 6 and then tested value + 1 and value - 1, for the following reason: all primes greater than 3 have the form 6n + 1 or 6n - 1. This is because all numbers of the form 6n + 2 and 6n + 4 are divisible by 2, and all of the form 6n + 3 are divisible by 3. That leaves 6n + 1 and 6n - 1, cutting the values tested down to 2/6, or 1/3, of all values between 3 and the square root of n. Using this method you will still check some values that are not prime, but it should reduce the work significantly compared with methods that only check primes and don't use a lookup table.
I was also thinking there might be a way to skip numbers while increasing the test value to decrease time, at the cost of some effectiveness (decreasing the probability of being correct from 100% to something slightly less). And maybe the time could be shortened by using an array for larger numbers, where the prime density is lower, so you cover a larger range per unit of memory, say from the square root of your maximum input down to an acceptable memory limit, and then use the 6n + 1 and 6n - 1 method below that range. This would probably cut out a lot of unnecessary checking on larger numbers. Do any of these sound like viable ways to reduce the time?
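The 6n ± 1 scheme described above can be sketched as a trial-division test (a minimal version under the commenter's assumptions; the function name is mine):

```python
def is_prime_6k(n):
    """Trial division testing only 2, 3, and divisors of the form 6k - 1 and 6k + 1."""
    if n < 2:
        return False
    if n < 4:          # 2 and 3 are prime
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    k = 5              # first candidate of the form 6*1 - 1
    while k * k <= n:
        if n % k == 0 or n % (k + 2) == 0:  # test 6k - 1 and 6k + 1
            return False
        k += 6
    return True
```

Only about 1/3 of candidate divisors are tried, matching the 2/6 figure above, though some of those candidates (25, 35, ...) are themselves composite.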
• Yes, you are on the right track! I was going to post a video on 'wheel factorization' (http://primes.utm.edu/glossary/page.php?sort=WheelFactorization). Basically, we can first check divisibility by the first few primes (say 2, 3, 5), then imagine a wheel of circumference 2 * 3 * 5 = 30 rolling along the number line. The spokes of this wheel are all numbers relatively prime to 2, 3, and 5. Wherever a spoke lands, we assume the number might be prime and use it as the next check. We can make larger wheels; for example, starting with 2, 3, 5, 7 eliminates over 77% of the composites. Eventually larger wheels don't pay off due to memory (the good old time-space tradeoff).

So these methods, while more efficient, won't solve our problem.
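The 2, 3, 5 wheel in that reply can be sketched like this (names are my own; this only generates the spokes and the trial divisors they produce, not a full factoring routine):

```python
from math import gcd

WHEEL = 30  # 2 * 3 * 5
# Spokes: residues mod 30 that are relatively prime to 2, 3, and 5.
SPOKES = [r for r in range(1, WHEEL) if gcd(r, WHEEL) == 1]
print(SPOKES)  # [1, 7, 11, 13, 17, 19, 23, 29]

def wheel_candidates(limit):
    """Yield trial divisors > 5 up to limit: wherever a spoke lands as the wheel rolls."""
    base = 0
    while base <= limit:
        for spoke in SPOKES:
            c = base + spoke
            if 5 < c <= limit:
                yield c
        base += WHEEL
```

Only 8 of every 30 numbers survive as candidates (~27%), which is where the "eliminates over 73% of composites" style of figure for this wheel comes from.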
• Any chance of getting videos on Pollard's p - 1 and rho algorithms? How about the Chinese Remainder Theorem? Also something on WEP and why it's so easy to break. Keep up this awesome series!
• Thanks for the advice/feedback, Richard. WEP is coming soon, and the Chinese Remainder Theorem is also on the TODO list. I won't be posting updates for a few months, as I'd like to finish information theory first.
• Now why is calculating how many primes lie in a certain range so hard? Why can't you just count on the prime spiral how many are in a certain space and scale that up to get the number you need?