One of the most enduring abstractions in computer science is that memory is located by address, holds values for a long time (though typically not forever), and is uniform in the cost of access. Storage like disk or tape is meant for the longest-term data and, to a first-order approximation, keeps it forever. Glossed over in this abstraction are a few obvious things like differing performance characteristics and size limits.
It takes a while to learn that this model of memory is not exactly right. To get a program working you need to know lots of other things, and this simple model works well enough. If you are lucky, you won't have to deal with the differences between reality and the model.
The notion that memory is infinite in size is the first to go. Your first programs never need much space, but then you start down the road of saving computations, collecting data from external sources that needs rapid access, and so on. One day you either get an out-of-memory error, or you notice that your program takes far too much time because you have less real memory than the virtual memory your program needs. Solving these problems usually takes some major surgery to the program, and anyone who has discovered them just before delivering will learn to do some estimating early in the design.
The next thing you might discover is that the pattern by which a program addresses memory can change the speed of access. This is largely due to cache memory in the CPU, but it can also be due to memory that hasn't been referenced in a while being paged out. Programmers who care about performance learn how memory caches operate, but there are many different implementations with different characteristics. Unless you are lucky enough to be programming for just one implementation, your code should behave reasonably well on all supported processors, which is not easy given that new processors keep coming out. You probably need to write some benchmark programs to run on any processor you decide to support, and you probably need to keep up with what the chip manufacturers are planning.
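To make the effect concrete, here is a minimal sketch in C of two loops that do the same arithmetic but address memory in different orders. The array size and the use of clock() are arbitrary choices for illustration, and the ratio between the two timings depends entirely on the cache hierarchy of the processor you happen to run it on.

    /* Illustration only: summing a matrix row by row touches consecutive
     * addresses and makes good use of cache lines; summing it column by
     * column strides across memory and typically runs noticeably slower. */
    #include <stdio.h>
    #include <time.h>

    #define N 2048

    static double grid[N][N];

    static double sum_by_rows(void)
    {
        double s = 0.0;
        for (int i = 0; i < N; i++)          /* consecutive addresses */
            for (int j = 0; j < N; j++)
                s += grid[i][j];
        return s;
    }

    static double sum_by_columns(void)
    {
        double s = 0.0;
        for (int j = 0; j < N; j++)          /* stride of N doubles */
            for (int i = 0; i < N; i++)
                s += grid[i][j];
        return s;
    }

    int main(void)
    {
        clock_t t0 = clock();
        double a = sum_by_rows();
        clock_t t1 = clock();
        double b = sum_by_columns();
        clock_t t2 = clock();
        printf("rows: %.3fs  columns: %.3fs  (sums %g %g)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, a, b);
        return 0;
    }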
In the end, you learn that memory has lots of features. It may be shared between processes, it is not uniform in its performance, it may disappear because of access permissions, and it may change value from one instruction to the next because of another CPU or thread.
These cases are fairly well covered in processor manuals, device driver kits, and textbooks. So even though we as an industry continue to see mistakes made because people don't understand them, at least they are discoverable by most programmers.
This is all background to two stories from the days before semiconductor memories took over the market. I started computing on machines that used core memory, and later I even worked with a machine that used drum memory as its main memory.
Core memories work by storing state in magnets: electric currents in wires create magnetic fields in the desired direction. Each magnetic donut had two electric wires running through it at right angles. The current in one wire wasn't enough to flip the magnet, but if both were on, the donut would flip its magnetic direction. Later, to read the value, the field is driven in one direction; if the magnet changes direction, it induces a current in a sense wire, and if it doesn't, no current appears. So, in this way, the memory subsystem could tell which way the magnet was set. More than you want to know about this technology can be found at http://en.wikipedia.org/wiki/Core_memory
It is worth pointing out that reading the memory actually sets the value to zero, so the memory subsystem has to re-write it. Since it takes time to write the old value back, specs at the time would cite both how long it took to read a value and how long before the next operation could start. On one machine I worked on, the architect also figured out that some instructions could be sped up if the re-write was aborted and the memory system waited for a new value to write. This was great for instructions like "Add to Memory": the memory location would be read, the CPU would compute the result and then hand the memory system the new value to write into the location that had just been read and zeroed. A documented side effect was that any other memory operation was locked out during this read-alter-re-write cycle, so these instructions were also used for semaphore and thread-locking operations. This caused problems later, when semiconductor memory was used and multiple processors became more common.
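The same idea survives today as explicit atomic read-modify-write operations. As a rough illustration of why an uninterruptible read-alter-re-write cycle is useful for locking, here is a minimal C11 spinlock sketch; it is a generic modern analog, not the instruction set of the machines in these stories.

    /* Acquiring a lock needs "read the old value and write the new one"
     * to happen as one indivisible step, just like the read-alter-re-write
     * cycle described above. */
    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;

    void acquire(void)
    {
        /* Atomically read the flag and set it; if it was already set,
         * someone else holds the lock, so spin and try again. */
        while (atomic_flag_test_and_set(&lock))
            ;
    }

    void release(void)
    {
        atomic_flag_clear(&lock);
    }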
The first time I saw an actual core memory was in the basement of my father's business partner's house. He had been offered an opportunity to sell GE computers in the NYC area and thought he should know something about them before he took the job. So he bought some surplus parts on Canal Street in Manhattan and put together a working CPU, core stack, and teletype interface. His conclusion was that computers were not as interesting as he had thought, so he didn't take the job. He gave me the memory pictured above. I think it is a register set, but I really don't know.
OK. Here is the first story:
Dartmouth developed a time-sharing system that was operational by mid-1964. BASIC was also invented there by John Kemeny and Tom Kurtz, and Kemeny wrote the first compiler. The computer system was actually two separate computers: the GE 235 ran the compilers and user programs, and the GE Datanet-30 communicated with the terminals and handled user commands, much as CMD.exe does on Windows. (Arnold Spielberg was the designer of the 235 for General Electric.)
The computer was available to everyone in the Dartmouth community. This was pretty radical for its time, given the expense of computing, but it didn't cost that much more than a more restrictive policy and allowed for interesting non-traditional uses of the computer. The funny thing was that every Monday morning, when the first person signed on, the system would crash. It was easy enough to recover from, but annoying and unexplained. I believe it was John McGeachie who finally figured it out, because I heard the story from him.
It turns out that a) no one used the computer on weekends and b) when the machine was idle it just executed a "jump to self" instruction while waiting for an interrupt. Why no one would use the computer on the weekends might not be understandable today, but computers back then were islands, not connected to other computers. Computer games were basically unknown and really not that interesting on a 10-character-per-second terminal. But more importantly, Dartmouth was all-male at the time, so non-academic activities were pursued with vigor on the weekends. :-)
So the machine sat idle for days executing the same instruction over and over. The net result was that parts of memory that should not have changed value when read (remember, reading clears memory) were changed. I'm not clear on the physics of why this happened, but it was observed. The solution was to change the idle loop to also sweep through every address in memory, just reading, which spread the magnetic fields out across the whole memory. On the next machine the college got, they implemented the same idea, but with the addition of an instruction that you won't see on many machines: Gray-to-Binary. It was slower than a divide instruction and didn't reference memory, which cut down on memory traffic in case the other CPU or the I/O channels needed the bandwidth.
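For readers who haven't met it, Gray-to-binary conversion is the kind of bit-twiddling shown below. This is a minimal sketch of the standard reflected Gray code conversion, written in C purely for illustration, not a description of how the GE hardware implemented its instruction.

    /* Convert a reflected (standard) Gray code value back to plain binary.
     * Each binary bit is the XOR of the Gray-code bits at or above its
     * position, so the loop folds the value onto itself one shift at a time. */
    unsigned gray_to_binary(unsigned gray)
    {
        unsigned binary = gray;
        while (gray >>= 1)
            binary ^= gray;
        return binary;
    }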
The second story comes from the mid-1970s:
College hires are typically given jobs that are challenging but that, if they fail, won't bring the company down with them. My former business partner started a job at Data General, where he was given the task of writing the file system repair program, while someone hired at the same time got the memory test program.
You might think a memory test is simple, and a simple test is. But to do a good job, you should have a model of how the memory works and what kinds of errors it can have. A good memory test, when it detects an error, will give instructions on what to repair. There are plenty of papers and a lot of practical knowledge in this area.
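To give a flavor of what such tests look like, here is a minimal sketch of one classic check from the literature, an address-in-address pass that catches stuck or shorted address lines; it is a generic illustration with placeholder parameters, not the Data General program from this story.

    /* Write each word's own address into it, then read everything back.
     * If two addresses alias because of a bad address line, they end up
     * holding the same value and the verify pass catches it. */
    #include <stddef.h>
    #include <stdint.h>

    /* Returns the first failing word index, or (size_t)-1 if the region passes. */
    size_t address_in_address_test(volatile uintptr_t *base, size_t nwords)
    {
        for (size_t i = 0; i < nwords; i++)
            base[i] = (uintptr_t)&base[i];      /* fill: word = its own address */

        for (size_t i = 0; i < nwords; i++)
            if (base[i] != (uintptr_t)&base[i]) /* verify */
                return i;

        return (size_t)-1;
    }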
Apparently this new college hire was better than they thought, or perhaps they were too busy to notice that he had completed what was needed and had moved on to more extensive tests. He noticed that when a core flipped (this was at the end of the core era), it vibrated a little bit. He soon figured out that if you made the cores flip at the right frequency, you could set up a standing wave on the wires. And for a good percentage of the memories, if you did this long enough, it broke the wires.
Management was impressed, but banned the program.
This, to me, is another example of how even a boring task can be made interesting if you try hard enough. And of how, if you hire good people, you may not get exactly what you want.