Memory
Our memory is what lets our computer keep track of all the data it needs to do complex calculations at rates far faster than a human possibly could. The more memory we have, the more things we can keep around at once and the faster we can work with them. This applies to every program our computer runs, from a calculator to a game.
We generally have two types of memory: cache memory and RAM (Random Access Memory).
Cache memory is the faster of the two and is located closest to the CPU in our computer. The CPU is what does our calculations, so the closer our data is to the CPU, the faster our calculations. The problem with cache memory is that we have a lot less of it than we do of RAM. This means we want to do as much of our work out of cache as we can before resorting to RAM, in order to be as fast and efficient as possible. We also have several layers of cache memory: the closest one is labelled "L1", while the others are L2, L3, and so on. L1 is the fastest layer, and the others get progressively slower.
Random Access Memory is the slower of the two and is located farther away from the CPU. However, you can have much more of this kind of memory; it's commonly sold in 1-8 gigabyte sticks now, and a computer will often have several of them.
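If you want to actually see the cache at work, here's a rough C++ sketch of my own (the array size is just a placeholder and the timings will vary a lot by machine): it sums the same big array once in sequential order and once in a shuffled order. The sequential pass reuses each cache line it pulls in, while the shuffled pass misses constantly and keeps going out to RAM, so it usually comes out noticeably slower.

```cpp
#include <algorithm>
#include <chrono>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

int main() {
    const std::size_t n = 1 << 24;            // ~16 million ints, bigger than any cache
    std::vector<int> data(n, 1);

    std::vector<std::size_t> order(n);
    std::iota(order.begin(), order.end(), 0); // 0, 1, 2, ... = sequential order

    // Sum the array in whatever order `idx` gives us and report how long it took.
    auto time_sum = [&](const std::vector<std::size_t>& idx, const char* label) {
        auto start = std::chrono::steady_clock::now();
        long long sum = 0;
        for (std::size_t i : idx) sum += data[i];
        auto stop = std::chrono::steady_clock::now();
        std::cout << label << ": sum=" << sum << ", "
                  << std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count()
                  << " ms\n";
    };

    time_sum(order, "sequential");            // cache-friendly walk

    std::shuffle(order.begin(), order.end(), std::mt19937{42});
    time_sum(order, "shuffled");              // cache-hostile walk, mostly misses
}
```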
Now that we've got a quick understanding of memory, let's talk more about what fragmentation is.
Fragmentation
Fragmentation is how memory begins to break up into scattered pieces and become unusable for some reason. Well, what is that reason exactly? Let's take a look at a block of memory.
Each block here represents one byte, let's say. When we create a variable we want to use in our program, say an integer to store a number for later and a boolean for some true or false value, each one takes up a different amount of memory. An int is, say, 4 bytes, while a boolean is only 1 byte.
Now, when we create these variables, they aren't actually laid out in order; they get placed wherever there happens to be room, via something called dynamic heap allocation. Whenever we create a new variable this way, it could end up anywhere in our memory, at one end or the other. Since our memory is so big, normally we don't worry about this. But if we have 1 million variables, things get messy and out of order. This is what it might look like.
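Here's a tiny C++ sketch of my own that shows both ideas: sizeof reporting how much space an int and a bool take, and new handing back heap addresses that aren't guaranteed to sit next to each other or come out in any particular order.

```cpp
#include <iostream>

int main() {
    // Each type takes up a different amount of memory (exact sizes can vary by platform).
    std::cout << "int is  " << sizeof(int)  << " bytes\n";   // usually 4
    std::cout << "bool is " << sizeof(bool) << " bytes\n";   // usually 1

    // Dynamic heap allocation: each `new` asks the allocator for a free spot.
    // The addresses that come back aren't guaranteed to be next to each other
    // or in any particular order.
    int*  a = new int(42);
    bool* b = new bool(true);
    int*  c = new int(7);

    std::cout << "a lives at " << static_cast<void*>(a) << "\n";
    std::cout << "b lives at " << static_cast<void*>(b) << "\n";
    std::cout << "c lives at " << static_cast<void*>(c) << "\n";

    delete a;
    delete b;
    delete c;
}
```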
This causes the following problem. Let's say we have some blocks of memory free: a 1 KB block and another 1 KB block we can use. This is all we have left, and we want to call a function that needs 2 KB of memory. Well, our 2 free blocks add up to 2 KB, right? Everything should be good. Nope: the allocation fails and the program falls over, even though we technically have 2 KB free.
Four free spaces for an int, but we still can't put it in!
What just happened? Well, the reason it failed is that we can't split the allocation across the 2 blocks separately. We need one contiguous block of 2 KB, not 2 split 1 KB blocks, for our function to work. This is the kind of thing that happens with fragmentation: we get all these weird gaps in our memory that become unusable by anything that needs a bigger chunk. When you have a lot of different variables of different sizes being put in all sorts of places, you're prone to running out of memory earlier than you should, even if you have some memory left to use.
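To make that concrete, here's a toy first-fit allocator over a pretend 4 KB heap, entirely my own illustration (real allocators are far more sophisticated). After freeing two 1 KB blocks that aren't next to each other, a 2 KB request fails even though 2 KB is free in total.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

struct Block { std::size_t offset, size; bool free; };

// Toy first-fit allocator over a 4 KB "heap". Purely for illustration.
class ToyHeap {
    std::vector<Block> blocks_{{0, 4096, true}};   // starts as one big free block
public:
    // Returns the offset handed out, or -1 if no single free hole is big enough.
    long long alloc(std::size_t size) {
        for (std::size_t i = 0; i < blocks_.size(); ++i) {
            if (blocks_[i].free && blocks_[i].size >= size) {
                std::size_t offset   = blocks_[i].offset;
                std::size_t leftover = blocks_[i].size - size;
                blocks_[i] = {offset, size, false};          // hand out the front part
                if (leftover > 0)                            // the rest stays a free hole
                    blocks_.push_back({offset + size, leftover, true});
                return static_cast<long long>(offset);
            }
        }
        return -1;  // total free space might be enough, but no single hole is
    }

    void free_at(long long offset) {
        for (auto& b : blocks_)
            if (static_cast<long long>(b.offset) == offset) b.free = true;
    }
};

int main() {
    ToyHeap heap;
    long long a = heap.alloc(1024);  // bytes [0, 1024)
    heap.alloc(1024);                // bytes [1024, 2048), stays in use
    long long c = heap.alloc(1024);  // bytes [2048, 3072)
    heap.alloc(1024);                // bytes [3072, 4096), stays in use

    heap.free_at(a);                 // free two blocks: 2 KB free in total...
    heap.free_at(c);                 // ...but split into two separate 1 KB holes

    long long big = heap.alloc(2048);
    std::cout << (big == -1 ? "2 KB allocation FAILED even though 2 KB is free\n"
                            : "2 KB allocation succeeded\n");
}
```

A real allocator would also merge adjacent free holes back together (coalescing), but that doesn't help here, because the two free holes have live data sitting between them.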
Fixes for this!?
Contiguous Memory Allocation
There are a variety of different methods we can use in order to fix this. First off, instead of making variables and dynamically allocating them, we can use techniques that give us a contiguous form of memory allocation. What this means is putting our data right next to each other in memory, instead of randomly placed. Two well-known methods are stack allocation and pool allocation. For the sake of not making this post too huge I won't go into great detail about both. However, here is a brief example of how an array, a stack-allocated type, works.
Let's say for example you want to make 5 integers. Instead of making 5 of them individually, you make an array of integers. Here's a comparison of what they would look like.
As you can see, while our separate integers would be scattered, our array is nice and neat, located in one little block.
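Here's a small sketch that prints the addresses in both cases. (On any given run the individually allocated ints may happen to land close together, but the allocator makes no promise about that; the array elements, on the other hand, are guaranteed to be contiguous.)

```cpp
#include <array>
#include <iostream>

int main() {
    // Five integers allocated individually on the heap: each one lives
    // wherever the allocator happened to find room for it.
    int* scattered[5];
    for (int i = 0; i < 5; ++i) {
        scattered[i] = new int(i);
        std::cout << "heap int  " << i << " at " << static_cast<void*>(scattered[i]) << "\n";
    }

    // Five integers in a stack-allocated array: one contiguous block,
    // each element exactly sizeof(int) bytes after the previous one.
    std::array<int, 5> packed{0, 1, 2, 3, 4};
    for (int i = 0; i < 5; ++i)
        std::cout << "array int " << i << " at " << static_cast<void*>(&packed[i]) << "\n";

    for (int* p : scattered) delete p;
}
```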
Defragging
Another thing you can do is defragging (also called compacting memory). Defragging involves moving everything that's still in use down to the lowest addresses. This means packing it all near the bottom so that any holes get squeezed out, leaving all the free space in one big block. You would run this every now and then to make sure fragmentation doesn't build up.
One problem this presents is for pointers. (Pointers hold memory addresses, so they can be used to reference specific locations in memory or to refer to other variables.) Since a pointer points to one specific location in memory, when you defrag and shift addresses down to close the holes, whatever your pointer was looking at before may have moved, so the pointer is now stale or pointing at something else entirely. The way to fix this is to make sure you can repoint your pointers at the correct variables after the move, which is easier said than done.
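Here's a rough sketch of what a compacting pass plus the pointer fix-up can look like, using offsets into a small byte buffer as stand-in "pointers" (my own toy example, nothing like a production garbage collector or allocator).

```cpp
#include <cstring>
#include <iostream>
#include <vector>

// Where an object currently lives in our toy heap. The `offset` plays the
// role of a pointer: anything referring to the object goes through it.
struct Object {
    std::size_t offset;  // current location in the buffer
    std::size_t size;    // how many bytes it occupies
    bool live;           // false = freed, its space is a hole
};

// Compacting "defrag" pass: slide every live object down toward offset 0
// and update its recorded offset (i.e. repoint the "pointer") as we go.
void defrag(std::vector<unsigned char>& heap, std::vector<Object>& objects) {
    std::size_t write = 0;  // next free position at the bottom of the heap
    for (Object& obj : objects) {
        if (!obj.live) continue;                                      // skip the holes
        if (obj.offset != write)
            std::memmove(&heap[write], &heap[obj.offset], obj.size);  // move the bytes down
        obj.offset = write;                                           // fix up the reference
        write += obj.size;
    }
    // Everything from `write` to the end is now one big contiguous free block.
}

int main() {
    std::vector<unsigned char> heap(16, 0);
    std::vector<Object> objects = {
        {0, 4, true},    // A: in use
        {4, 4, false},   // B: freed, leaves a hole
        {8, 4, true},    // C: in use
        {12, 4, false},  // D: freed, leaves a hole
    };

    defrag(heap, objects);

    std::cout << "A now at offset " << objects[0].offset << "\n";  // 0
    std::cout << "C now at offset " << objects[2].offset << "\n";  // 4 (moved down)
}
```

This is also why some systems prefer handles (an extra level of indirection, like the offsets above) over raw pointers: after a compaction pass you only have to update one table entry per object instead of hunting down every raw pointer that referred to it.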
Closing words
Fragmentation is a very important concept as your programming gets more advanced. For small programs you will hardly ever use up your memory that fast, so you don't have to worry about this. But the larger and more complex your program gets, the more important defragging and avoiding fragmentation become.