Memory is at the core of all logic, human or machine: nothing can be processed without a place to store data, which is why memory has always been one of the core components of computer design. When we talk about memory, most of us assume we mean RAM, but that's not how things actually started off.
So today's article covers a brief history of RAM: how it evolved, and the basic types of memory we use today, such as DDR3 SDRAM. I'll also touch on some of the future RAM technologies like Z-RAM and TT-RAM.
A Quick Look into the History Books
Early computers had a completely different concept of memory from the one we use today. Anyone who has studied computer science will know that they relied on an electrical device called the vacuum tube, similar in principle to the picture tubes in CRT monitors and televisions. Then came the era of the transistor, invented at Bell Labs in 1947.
The transistor became the core component of modern-day memory, which started off with simple latches: circuit configurations of transistors that can store one bit of data. Latches evolved into flip-flops, which can be packed together to form the registers and static memory cells still used today. Another approach paired a transistor with a capacitor, which allowed smaller and denser dynamic memory.
Basic Types of Memory: SRAM and DRAM
Memory can easily be classified into two major categories: static RAM and dynamic RAM. Like I said above, static RAM uses a special arrangement of transistors to make a flip-flop, a type of memory cell that stores one bit of data. Most modern SRAM cells are made of six CMOS transistors, and they are the fastest type of memory on the planet.
In contrast, dynamic RAM pairs one transistor with a capacitor to create an ultra-compact memory cell. On the flip side, the cell needs to be refreshed periodically to keep the charge from leaking out of the capacitor, which introduces latency into memory access (something we refer to as memory timings).
While DRAM has an obvious size advantage over SRAM, its speed doesn't come close to that of static memory cells, which never need to be refreshed and are always available. That's why the fastest memory is always built from SRAM cells, like the registers in a CPU and the caches used in numerous devices. But because each SRAM cell takes up far more space, SRAM is expensive and can't be used as primary memory.
DRAM, on the other hand, is quite dense, and is therefore employed wherever large capacity matters more than instantaneous access, such as a computer's main memory.
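To make the contrast concrete, here is a minimal sketch in Python of why a DRAM cell needs refreshing while an SRAM cell does not. The class names and leakage numbers are made up for illustration; this models the idea, not real hardware.

```python
# Toy illustration (not real hardware): an SRAM-style cell keeps its bit as
# long as power is applied, while a DRAM-style cell slowly loses charge and
# must be refreshed before it decays past a threshold.

class SRAMCell:
    """Cross-coupled latch: the stored bit never degrades."""
    def __init__(self):
        self.bit = 0

    def write(self, bit):
        self.bit = bit

    def read(self):
        return self.bit


class DRAMCell:
    """One transistor + one capacitor: the charge leaks over time."""
    LEAK_PER_TICK = 0.1   # made-up leakage per time step
    THRESHOLD = 0.5       # below this, a stored 1 reads back as 0

    def __init__(self):
        self.charge = 0.0

    def write(self, bit):
        self.charge = 1.0 if bit else 0.0

    def tick(self):
        # Time passes; the capacitor leaks a little charge.
        self.charge = max(0.0, self.charge - self.LEAK_PER_TICK)

    def refresh(self):
        # Sense the stored value and write it back at full strength.
        self.write(self.read())

    def read(self):
        return 1 if self.charge >= self.THRESHOLD else 0


sram, dram = SRAMCell(), DRAMCell()
sram.write(1)
dram.write(1)

for t in range(10):
    dram.tick()
    if t == 4:
        dram.refresh()   # comment this out and the DRAM bit decays to 0

print(sram.read(), dram.read())   # 1 1 with the refresh; 1 0 without it
```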
Asynchronous and Synchronous RAM
RAM can also be classified by how it operates. Everyone knows that electronic devices are driven by switching voltage pulses, which we call the system clock (the rate of which we call the frequency, or clock speed).
Synchronous RAM can only send or receive data on a clock edge, that is, when the clock signal rises or falls. I'll explain this in more detail later on. Asynchronous RAM can be accessed at any time during a clock cycle, which presents an obvious advantage over synchronous RAM.
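As a rough illustration, the difference boils down to whether a request is served immediately or has to wait for the next clock edge. The classes below are hypothetical and not a real memory controller interface.

```python
# Toy sketch: the asynchronous RAM answers at any time, while the
# synchronous RAM only commits work when the clock ticks.

class AsyncRAM:
    def __init__(self, size):
        self.cells = [0] * size

    def read(self, addr):
        return self.cells[addr]          # available at any moment

    def write(self, addr, value):
        self.cells[addr] = value         # takes effect immediately


class SyncRAM:
    def __init__(self, size):
        self.cells = [0] * size
        self.pending = []                # requests wait here for a clock edge

    def request_write(self, addr, value):
        self.pending.append((addr, value))

    def clock_edge(self):
        # All queued work is committed in lock-step with the system clock.
        for addr, value in self.pending:
            self.cells[addr] = value
        self.pending.clear()


async_ram = AsyncRAM(16)
sync_ram = SyncRAM(16)

async_ram.write(3, 42)                        # visible right away
sync_ram.request_write(3, 42)                 # nothing happens yet...
print(async_ram.read(3), sync_ram.cells[3])   # 42 0
sync_ram.clock_edge()                         # ...until the clock ticks
print(sync_ram.cells[3])                      # 42
```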
Single Data Rate SDRAM
SDR SDRAM is virtually obsolete now as far as the computer industry is concerned. It was one of the first mainstream synchronous memory architectures and was simply known as SDRAM at the time. Single data rate means that it transfers one word of data per clock cycle (64 bits on a standard DIMM). It was widely used throughout the 90s, in computer systems up to the Intel Pentium III.
Common SDR memory standards included PC-100 and PC-133, which ran at clock speeds of 100 MHz and 133 MHz respectively.
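Peak bandwidth follows directly from the clock rate and the module's data bus width. A quick back-of-the-envelope calculation, assuming the standard 64-bit (8-byte) DIMM bus:

```python
# Peak bandwidth of SDR modules: one 64-bit (8-byte) transfer per clock cycle.
BUS_WIDTH_BYTES = 8  # standard DIMM data bus

for name, clock_mhz in [("PC-100", 100), ("PC-133", 133)]:
    peak_mb_s = clock_mhz * BUS_WIDTH_BYTES   # one transfer per cycle
    print(f"{name}: {clock_mhz} MHz x {BUS_WIDTH_BYTES} bytes = {peak_mb_s} MB/s")

# PC-100: 100 MHz x 8 bytes = 800 MB/s
# PC-133: 133 MHz x 8 bytes = 1064 MB/s (usually quoted as ~1066 MB/s,
#         since the real clock is 133.33 MHz)
```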
Double Data Rate SDRAM
Also known as DDR memory, it was the direct successor to the single data rate SDRAM architecture. DDR improved upon the SDR design by transferring twice the data per clock cycle: one word on the rising edge and one word on the falling edge of the clock pulse. This provided a significant increase in performance over the older architecture. DDR memory was primarily used on Intel Pentium 4 and AMD Athlon platforms.
For marketing purposes, DDR memory has always been labelled with twice its actual clock speed, reflecting the effective transfer rate rather than the clock. For example, common memory standards for DDR included DDR-200, DDR-266, DDR-333 and DDR-400, which actually had clock speeds of 100 MHz, 133 MHz, 166 MHz and 200 MHz respectively.
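Extending the earlier back-of-the-envelope sketch, doubling the transfers per cycle doubles the peak bandwidth, which is also where the "PC-xxxx" module labels come from (again assuming the standard 64-bit bus):

```python
# DDR transfers twice per clock cycle (rising + falling edge), so the number
# in the module name is the effective transfer rate, not the clock.
BUS_WIDTH_BYTES = 8  # standard 64-bit DIMM data bus

for name, clock_mhz in [("DDR-200", 100), ("DDR-266", 133),
                        ("DDR-333", 166), ("DDR-400", 200)]:
    transfers = clock_mhz * 2                  # two transfers per cycle
    peak_mb_s = transfers * BUS_WIDTH_BYTES
    print(f"{name}: {clock_mhz} MHz clock, ~{transfers} MT/s, ~{peak_mb_s} MB/s peak")

# DDR-400, for instance, works out to ~3200 MB/s, hence its other name,
# PC-3200 (likewise DDR-266 = PC-2100 and DDR-333 = PC-2700).
```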
DDR2 SDRAM
The DDR standard gained a huge following and was subsequently improved to address high-performance memory needs, with higher bandwidth and clock rates and lower voltages that translated into notable improvements in overall system performance. DDR2 became the standard for most chipsets from the Pentium 4 Prescott era onwards, including Intel Core and AMD's Socket AM2 Athlon 64 platforms.
Common memory standards for DDR2 were DDR2-400, DDR2-533, DDR2-667, DDR2-800 and DDR2-1066. As with DDR, the modules actually run on a bus clock of half the labelled rate (DDR2-800, for instance, uses a 400 MHz bus).
DDR3 SDRAM
The DDR3 specification was finalized in 2007 and primarily raised the achievable clock rates while reducing voltages. Unfortunately, the latencies (measured in clock cycles) also increased significantly, so real-world applications saw only 2-5% performance gains over DDR2 on architectures that supported both standards. DDR3 is still the logical next step, though, because the latest AMD and Intel platforms (790/AM3 and X58/P55) only support DDR3 memory.
Common memory standards for DDR3 today include DDR3-1066, DDR3-1333, DDR3-1600, DDR3-1800 and DDR3-2000.
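To see why the extra clock speed gets partly eaten up by the higher latencies, it helps to convert CAS latency from cycles into nanoseconds. The CL values below are typical, illustrative figures rather than numbers from any particular module:

```python
# Absolute CAS latency = cycles / bus clock. The CL values are illustrative
# (typical retail modules of the era), not from a specific datasheet.
def cas_latency_ns(data_rate_mt_s, cas_cycles):
    bus_clock_mhz = data_rate_mt_s / 2          # DDR: bus clock is half the data rate
    return cas_cycles / bus_clock_mhz * 1000    # cycles / MHz -> nanoseconds

for name, rate, cl in [("DDR2-800", 800, 5),
                       ("DDR3-1066", 1066, 7),
                       ("DDR3-1333", 1333, 9)]:
    print(f"{name} CL{cl}: ~{cas_latency_ns(rate, cl):.1f} ns")

# Roughly: DDR2-800 CL5 ~ 12.5 ns, DDR3-1066 CL7 ~ 13.1 ns,
# DDR3-1333 CL9 ~ 13.5 ns. More cycles at a higher clock works out to about
# the same absolute latency, which is why the real-world gains are small.
```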
Other Technologies
Along with DDR, certain other memory standards also emerged but failed to capture the mainstream market because of their price-to-performance ratios. Most notable was Rambus DRAM (RDRAM), which was used in the likes of the Nintendo 64 and PlayStation 2, along with some early Pentium 4 systems. Its successor, XDR DRAM, is used in the PlayStation 3 console but was never adopted by any mainstream computer architecture.
Future technologies like Z-RAM, TT-RAM and A-RAM offer a new approach to dynamic memory cell construction that needs only one transistor (and no capacitor) to store a bit of data, while promising speeds equivalent to static RAM. They work on the principle of the floating body effect, which occurs as a side effect of the silicon-on-insulator (SOI) manufacturing process. AMD is already researching this technology for use in future CPU designs.
With feature sizes getting smaller and smaller, it is becoming impossible to push the transistor-capacitor memory architecture much further, because capacitors can't really shrink that much. The next logical step in the evolution of computer memory is to make the jump to nanotechnology and work at the molecular level. Of course, don't expect that to happen in the next five years; we still have an ample supply of memory (which is actually more than we really need).