Most modern semiconductor volatile memory is either static RAM (SRAM) or dynamic RAM (DRAM). Volatile memory is memory that loses its contents when power is removed. SRAM retains its contents as long as power is connected and is easier to interface with than DRAM, but it takes about six transistors to implement each bit, which limits its density and raises its per-bit cost. Dynamic RAM is more complicated to interface with and control, needing regular refresh cycles to prevent losing its contents, but it uses only one transistor and one capacitor per bit, allowing it to reach much higher densities and much lower per-bit costs. The refresh involves periodically reading the content of each cell and then rewriting that content back into the cell. DRAM also has slower read/write times than SRAM.
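Since the read-then-rewrite refresh cycle is the defining quirk of DRAM, a toy software model of the idea may help (this is purely illustrative; real DRAM refreshes rows in hardware, and all the numbers below are arbitrary):

    /* Toy model of DRAM refresh: each "cell" is a capacitor whose charge
     * leaks over time, and a periodic refresh pass reads every cell and
     * rewrites it before its stored bit becomes unreadable. */
    #include <stdio.h>

    #define CELLS     8
    #define THRESHOLD 50   /* below this charge, a stored 1 reads back as 0 */

    int main(void) {
        int bit[CELLS] = {1, 0, 1, 1, 0, 0, 1, 0};
        int charge[CELLS];

        for (int i = 0; i < CELLS; i++)
            charge[i] = bit[i] ? 100 : 0;        /* 1 = charged capacitor */

        for (int tick = 1; tick <= 8; tick++) {
            for (int i = 0; i < CELLS; i++)      /* charge leaks each tick */
                if (charge[i] > 0)
                    charge[i] -= 10;

            if (tick % 4 == 0) {                 /* the periodic refresh   */
                for (int i = 0; i < CELLS; i++) {
                    int read = charge[i] > THRESHOLD;  /* read the cell... */
                    charge[i] = read ? 100 : 0;        /* ...and rewrite it */
                }
                printf("tick %d: all cells refreshed\n", tick);
            }
        }
        return 0;
    }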
Main computer memory usually consists of DRAM, mainly because of its lower cost and higher density. SRAM, being more expensive per bit, is reserved for CPU cache memories, where its fast read/write times matter most. SRAM is also commonplace in small embedded systems, which might need only small amounts of memory.
There are a number of volatile memory technologies on the horizon that might replace or compete with SRAM and DRAM. These include Z-RAM, TTRAM, A-RAM and ETA RAM.
Above: a DIMM, or dual in-line memory module, comprising a series of dynamic random-access memory integrated circuits.
DIMM slots are usually placed very close together, and they are often color-coded in pairs. Generally, the paired slots must be populated together for best performance, or in some cases for the memory to work at all.
You use DRAM to expand the memory in the computer because it's a cheaper type of memory. Dynamic RAM chips are cheaper to manufacture than most other types because they are less complex. Dynamic refers to the memory chips' need for a constant update signal (also called a refresh signal) in order to keep the information that is written there.
Asynchronous DRAM is characterized by its independence from the CPU's external clock. Asynchronous DRAM chips have part numbers that end in a numerical value related to the access time of the memory, often one tenth of the actual value; a chip marked -6, for example, typically has a 60ns access time. Access time is essentially the interval between the moment information is requested from memory and the moment the data is returned. Common access times for asynchronous DRAM were in the 40- to 120-nanosecond (ns) range.
Because asynchronous DRAM is not synchronized to the frontside bus, you would often have to insert wait states through the BIOS setup for a faster CPU to be able to use such memory. These wait states were intervals during which the CPU had to mark time and do nothing while waiting for the memory subsystem to become ready again for subsequent access.
Common asynchronous DRAM technologies included Fast Page Mode (FPM), Extended Data Out (EDO), and Burst EDO (BEDO).
Synchronous forms of RAM are the only types of memory being installed in mainstream computer systems today.
Double data rate (DDR) SDRAM earns its name by doubling the transfer rate of ordinary SDRAM through double-pumping the data, which means transferring it on both the rising and falling edges of the clock signal. This achieves twice the transfer rate at the same front-side bus (FSB) clock frequency. Because it is increasing clock frequencies that create heating issues in newer components, keeping the clock the same is an advantage. The same 100MHz clock gives a DDR SDRAM system the effective transfer rate of a 200MHz clock in comparison to an SDR SDRAM system.
For marketing purposes and to aid in the comparison of disparate products (DDR vs. SDR, for example), the industry has settled on the practice of using this effective clock rate as the speed of the FSB.
There is always an 8:1 module-to-chip (or module-to-FSB speed) numbering ratio because 64-bit processors transfer 8 bytes at a time.
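A quick back-of-the-envelope sketch of this numbering arithmetic, using the standard published DDR-400/PC-3200 figures (the little program itself is purely illustrative):

    /* The DDR numbering scheme described above: double-pumping doubles the
     * transfers per clock, and a 64-bit (8-byte) bus gives the 8:1
     * module-to-chip ratio. */
    #include <stdio.h>

    int main(void) {
        int fsb_mhz       = 200;             /* actual FSB clock, MHz     */
        int transfers     = fsb_mhz * 2;     /* double-pumped: 400 MT/s   */
        int bus_bytes     = 8;               /* 64-bit data bus = 8 bytes */
        int bandwidth_mbs = transfers * bus_bytes;   /* 3200 MB/s         */

        printf("DDR-%d chips -> PC-%d modules (%d MB/s peak)\n",
               transfers, bandwidth_mbs, bandwidth_mbs);
        return 0;
    }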
Memory Error Checking and Correction (ECC) is used to find errors in storing and retrieving data from memory. If memory supports ECC, check bits are generated and stored with the data. An algorithm is performed on the data and its check bits whenever the memory is accessed. If the result of the algorithm is all zeros, the data is deemed valid and processing continues. ECC can detect single- and double-bit errors and actually correct single-bit errors.
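The following toy sketch shows the principle with a Hamming(7,4) code: three check bits protect four data bits, the recomputed check value comes out all zeros when the word is clean, and a nonzero value points at the flipped bit. Real memory ECC uses a wider SECDED code over 64-bit words, and this is not any particular controller's algorithm, but the idea is the same:

    #include <stdio.h>

    /* Build a 7-bit codeword; bit positions 1..7 hold p1 p2 d1 p3 d2 d3 d4. */
    static unsigned encode(unsigned d /* 4 data bits */) {
        unsigned d1 = d & 1, d2 = (d >> 1) & 1, d3 = (d >> 2) & 1, d4 = (d >> 3) & 1;
        unsigned p1 = d1 ^ d2 ^ d4;   /* check bits cover overlapping  */
        unsigned p2 = d1 ^ d3 ^ d4;   /* subsets of the data bits      */
        unsigned p3 = d2 ^ d3 ^ d4;
        return p1 << 0 | p2 << 1 | d1 << 2 | p3 << 3 | d2 << 4 | d3 << 5 | d4 << 6;
    }

    /* Recompute the check bits; a nonzero syndrome is the 1-based position
     * of a single flipped bit, so it can be corrected in place. */
    static unsigned correct(unsigned cw) {
        unsigned s = 0;
        for (int pos = 1; pos <= 7; pos++)
            if ((cw >> (pos - 1)) & 1)
                s ^= pos;             /* XOR the positions of all 1 bits */
        if (s)                        /* nonzero: flip the offending bit */
            cw ^= 1u << (s - 1);
        return cw;
    }

    int main(void) {
        unsigned cw  = encode(0xB);      /* store data 1011 with check bits */
        unsigned bad = cw ^ (1u << 4);   /* simulate a single-bit error     */
        printf("stored %02X, read %02X, corrected %02X\n", cw, bad, correct(bad));
        return 0;
    }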
Direct memory access, or DMA, lets a device bypass the CPU and place data directly into RAM. To accomplish this, the device must have a DMA channel devoted to its use. All DMA transfers use a special area of memory known as a buffer, set aside to receive data from the expansion card (or from the CPU, if the transfer is going the other direction). The basic PC DMA architecture limits these buffers in both size and memory location.
The above graphic shows the traditional data path from disk to RAM, which would entail a trip through the CPU. DMA allows the disk drive to hand off data directly to RAM.
Here, video coming out of a graphics card is being sent directly to a card for output from the computer, with no help from the CPU.
No DMA channel can be used by more than one device. If you accidentally choose a DMA channel that another card is already using, the usual symptom is that no DMA transfers occur and the device is unavailable. Certain DMA channels are assigned to standard AT devices; the floppy disk controller, for example, typically uses DMA channel 2. Advances in technology have reduced DMA's popularity, but it's still used by floppy drives and by some keyboards and sound cards. A modern system isn't likely to run short on DMA channels because so few devices use them.
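To make the idea of claiming a channel concrete, here is a sketch of how a driver might program the legacy 8237 DMA controller for a floppy read on its standard channel 2. The port numbers are the documented ISA ones; outb() stands in for whatever port-I/O primitive the platform provides, and the buffer address is a placeholder rather than a real allocation:

    #include <stdint.h>

    extern void outb(uint16_t port, uint8_t value);  /* assumed I/O helper */

    void dma_setup_floppy_read(uint32_t buf, uint16_t len) {
        /* The buffer must sit where ISA DMA can reach it and must not
         * cross a 64KB boundary (the size/location limits noted above). */
        outb(0x0A, 0x06);                    /* mask channel 2 while programming */
        outb(0x0C, 0xFF);                    /* reset the byte-order flip-flop   */
        outb(0x04, buf & 0xFF);              /* buffer address, low byte         */
        outb(0x04, (buf >> 8) & 0xFF);       /* buffer address, high byte        */
        outb(0x81, (buf >> 16) & 0xFF);      /* page register for channel 2      */
        outb(0x0C, 0xFF);                    /* reset flip-flop again            */
        outb(0x05, (len - 1) & 0xFF);        /* transfer count - 1, low byte     */
        outb(0x05, ((len - 1) >> 8) & 0xFF); /* count, high byte                 */
        outb(0x0B, 0x46);                    /* mode: single, write to memory, ch2 */
        outb(0x0A, 0x02);                    /* unmask channel 2; ready to go    */
    }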
Direct memory access (DMA) is usually turned on by default for devices such as hard disks and CD or DVD drives that support DMA. However, you might need to turn on DMA manually if the device was improperly installed or if a system error occurred.