## Computer science theory

What is our memory limit? How can we save time at the expense of space? Created by Brit Cruise.

## Want to join the conversation?

• Consider the following scenario:
1 - Since the pre-calculated array of primes is immutable, the 14.7 MB could be stored on the flash drive (which has 2 GB of available space; even if we only had a 1% quota for that drive, it would give us around 20.48 MB, more than enough space).
2 - Since we walk the array in only one direction, from beginning to end, one index at a time, instead of loading the whole array into RAM we could break the array into chunks and load each chunk only when we need it (this is called lazy loading).
The code to lazy load a chunk of the array is relatively simple and could be part of the operating system (stored in the ROM).
So... we would need a few additional steps (say 2 to 3) to load the next chunk of the array every N steps, where N is the size of the chunk. Every time we reach the end of a chunk we load the next one.
And (to be fair) we would also use 1 additional byte to store the number of the chunk we are currently working on, so we know which one to load next.
So the amount of RAM required would actually be the size of the chunk times 24 bits.
Say we decide N = 1,048,576 (1024^2). In that case the chunk would occupy exactly 3 MB of RAM, an amount within our limit.
Our maximum index is 5,166,823, so we would need to load a chunk 5 times (5,166,823 / 1,048,576 = 4.92... => 5, rounded up).
That would require 10 to 15 extra steps in total, which, in my opinion, could be considered acceptable performance-wise and probably would not add much computational time.
The size N determines how many times a new chunk needs to be loaded, and so the number of additional steps.
In our case the optimal size would be N = 3,145,728 (3 * 1024^2), which would occupy 9 MB of RAM (still within our limit) and require only one reload operation.
What do you think about that?
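The chunked lazy-loading idea above can be sketched in a few lines. This is a toy illustration, not NASA's actual setup: the file name, the 3-byte (24-bit) little-endian record format, and the chunk size are all assumptions taken from the numbers in the comment.

```python
# Toy sketch of lazy-loading a pre-computed prime array in fixed-size chunks.
# Assumes each prime is stored as a 3-byte (24-bit) little-endian record.

CHUNK_SIZE = 1_048_576  # N: primes per chunk (the 3 MB case discussed above)

def read_chunk(f, chunk_index):
    """Load one chunk of 3-byte prime records into RAM."""
    f.seek(chunk_index * CHUNK_SIZE * 3)
    raw = f.read(CHUNK_SIZE * 3)  # returns fewer bytes near end of file
    return [int.from_bytes(raw[i:i + 3], "little") for i in range(0, len(raw), 3)]

def walk_primes(path):
    """Walk the whole array front to back, loading chunks only on demand."""
    with open(path, "rb") as f:
        chunk_index = 0  # the single extra byte of bookkeeping mentioned above
        while True:
            chunk = read_chunk(f, chunk_index)
            if not chunk:
                break  # past the end of the file
            yield from chunk
            chunk_index += 1
```

Only one chunk is ever resident in RAM at a time, which is exactly the time-for-space trade being proposed.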
• I applaud your technical prowess! Bravo! However, you are assuming that we can do a lazy load, which really just moves memory around. If suddenly we find out that we have to deal with primes up to 100 digits, this method will fail, along with any method based on trial division.

Solving this problem will require some new mathematics. ;-)
• I've been reading a lot recently about Deep Blue's historic win over former world champion Garry Kasparov. Would the same principles of efficiency, time-space tradeoff, and binary memory apply to another extremely difficult calculation with billions of possibilities, like finding a favorable chess position? Also, IBM has said that Deep Blue mainly used brute force to calculate every position possible in, say, 10 moves. Is this even possible? Also, Deep Blue used over 20 chips specially designed to play chess to beat Garry Kasparov in 1997. However, in 2009, a chess engine called Hiarcs 13 had a FIDE rating of around 2500, only slightly lower than Deep Blue's (still enough to crush you or me mercilessly), yet Hiarcs 13 ran on an unmodified HTC smartphone. Is this because of the increased efficiency of the program, or is it because of Moore's Law?
• Chess programs look at future moves and evaluate how good each position is for each player. The evaluation of the position involves searching several moves into the future to come up with a score for the position. It is possible, and often likely, that during a game a position will need to be evaluated more than once.

This offers a classic time space trade off. After we evaluate a position, we can store the result, and save the time required to calculate it in the future. This improvement in speed comes at the price of space.

Notably in chess, the number of possibilities at the front and end of a game are relatively limited, in comparison to the explosion of possibilities in the middle part of a game. Thus, 10 moves at the start of a game is not comparable to 10 moves in the middle of a game.

How newer chess programs perform so well on regular computing devices:
-Pruning: eliminate choices that don't look promising from searches early on to save time. (Deep Blue avoided doing a lot of pruning, because pruning can accidentally eliminate good positions too)
-Opening move tables: tables which contain optimal opening move sequences
-End game tables: tables which find positions where a player can force a mate
-Refutation tables: tables which identify promising-looking moves that will, in fact, turn out to be bad moves later on
-Time limits: Deep Blue had to work under tight time constraints, which we typically don't force our programs to work under
-Moore's Law

Hope this makes sense
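The "store the result, save the time" idea in this answer is what chess programmers call a transposition table. Here is a minimal sketch of that caching pattern; `evaluate_position` is a stand-in toy function, not a real engine's evaluator.

```python
# Minimal sketch of the time-space trade-off described above: cache position
# evaluations so a position reached more than once is only scored once.
# evaluate_position and its scoring rule are placeholders, not a real engine.

transposition_table = {}  # (position, depth) -> cached score: the "space" we spend

def evaluate_position(position, depth):
    """Stand-in for an expensive recursive search (toy computation)."""
    if depth == 0:
        return sum(ord(c) for c in position) % 100  # arbitrary toy score
    return -evaluate_position(position[::-1], depth - 1)

def cached_evaluate(position, depth):
    key = (position, depth)
    if key not in transposition_table:                 # cache miss: pay the time
        transposition_table[key] = evaluate_position(position, depth)
    return transposition_table[key]                    # cache hit: pay only space
```

Every repeated lookup after the first is a dictionary access instead of a full search, which is the improvement in speed bought at the price of space.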
• We could use http://en.wikipedia.org/wiki/Consistent_Overhead_Byte_Stuffing to reduce the space needed to store the data in memory to 26 bits for the largest number and 3 bits for the smallest.
• Correct! However, in the limiting case (when we need to list primes up to several hundred digits or more), even if each prime were compressed, the list would still require more space than the visible universe. So this method does help us with small inputs, but it doesn't solve the problem for large inputs.
• This question is mostly about technology. I just thought of this a while ago. Say you have a time machine that can only go into the future. So you use your time machine to go 100 years into the future. Once there you see floating cars, and you see that you are on a floating city in the clouds. But has the world actually changed that much? Let me explain. As new problems arise, we make something to fix them, right? Yes, I think that's true most of the time. As we go along we keep fixing whatever problems arise. Soon we arrive at floating cars and cities in the clouds. Then we keep fixing more problems. Let me explain again (this is really hard to explain) in a better way. Say you have a computer hard drive. When it gets old and new ones come out, you buy one. It does the same function, right? Yes, yes it does. But soon this drive becomes the new normal. This goes on for thousands of years, until traveling to the edges of the universe becomes the new normal. Yeah, sure, the world has changed, but everything's still normal. So has it really changed all that much? This was very hard for me to put into words. Can someone please answer it? I didn't know exactly where to put this question.
• It hasn't changed at all, in terms of normalcy. You are exactly right: as humanity progresses, it redefines 'normal' along the way. Just 50 years ago, someone might have seen a CD player and been astounded at the height of future technology. Now we think that CDs are old-fashioned and use MP3s instead. An iPhone 1 might have been seen as magic one hundred years ago, but now it is just an outdated model.
• Why can't we use bits of different shapes or compositions, so as we are able to code numbers in base 3 or higher? Surely it would be more efficient?
• I would argue that the fundamental reasons we use binary, as opposed to a different base are:
- All the operations performed by our modern computers (by their arithmetic logic units (ALUs)) are made up of logic gates (built from transistors) which perform AND, OR, and NOT operations. These operations take true and false as inputs and return true or false as an output. True and false map easily onto the 0 and 1 used in binary, but would not map well onto base 3 or higher. We have been able to make these logic gates faster and faster and smaller and smaller throughout the years.
-Computers are multipurpose, so in order to perform tasks on a computer beyond just calculations (add, subtract, multiply, divide) we want to use logic gates, because they are able to take any possible set of inputs and produce any possible set of outputs (they are "functionally complete").
-A competing technology (super small and super fast) that takes 2 inputs with more than 2 states and produces an output with more than 2 states has not been developed.
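The "functionally complete" point above can be demonstrated concretely: AND, OR, and NOT can all be built out of a single two-input gate. This sketch uses NAND as that building block (one standard choice; the answer above doesn't name a specific gate).

```python
# Demonstration of functional completeness: AND, OR, and NOT
# built entirely from a single primitive gate (NAND).

def nand(a, b):
    """The only primitive: true unless both inputs are true."""
    return not (a and b)

def not_(a):
    # NOT(a) = NAND(a, a)
    return nand(a, a)

def and_(a, b):
    # AND(a, b) = NOT(NAND(a, b))
    return not_(nand(a, b))

def or_(a, b):
    # OR(a, b) = NAND(NOT(a), NOT(b))  (De Morgan's law)
    return nand(not_(a), not_(b))
```

Because every input and output is one of exactly two states, these compositions map directly onto transistor circuits, which is part of why two-state logic has been so easy to make small and fast.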
• I know about RAM (Random Access Memory), ROM (Read-Only Memory, for system data), but not flash memory. Can someone explain to me what it is?
• Flash memory is a type of technology used to store data.
It's basically the hard drive he is talking about in the video when he uses that term.
If I'm not mistaken, flash memory is what is used in USB sticks and memory cards.
• Why does NASA need the primes on Mars? For aliens, or what?
• How do programs crash?
• A program crashes when it runs into something it can't handle, for example when it needs more memory than is available, or when it hits a situation the program's creator never accounted for.
• To the problem with bits... Theoretically, if you could manipulate every atom in the observable universe as a bit, you could store a number as large as 2 to the power of 10^80, which would possibly store all of these primes, I think... Am I wrong?
• you're going to give up your atoms though (joking)
• Quantum "computing" systems may bypass encrypted information as well as eliminate the point-A-to-point-B data pathways. It looks like communications secrecy may become almost impossible as quantum systems are introduced. Entanglement is only the very tip of the iceberg. Something closely related to telepathic communication is where this is headed. Maybe hard to believe, but so was the idea of television 200 years ago.
• Quantum computers do not make communications secrecy almost impossible.

For symmetric key ciphers:
-One time pad will be unaffected.
-Symmetric key ciphers only have their effective key size reduced by half (solution: double the key size)

For public key ciphers:
-RSA and Diffie-Hellman will be broken
-Any cipher that relies on prime factoring or discrete logs being hard will be broken
-Public key ciphers that won't be broken are those based on:
-lattice problems (NTRU relies on these)
-error correcting codes
-multivariate polynomials over finite fields
-other hard problems