Memory architecture is a key component of modern computer systems, and the importance of the memory hierarchy grows as microprocessor performance advances. A traditional memory hierarchy consists of embedded memory (such as SRAM and embedded DRAM) for on-chip caches, commodity DRAM for main memory, and magnetic hard disk drives (HDDs) or solid-state drives (SSDs) for storage. Technology scaling of SRAM and DRAM, the memory technologies that underpin this traditional hierarchy, is increasingly constrained by fundamental technology limits. In particular, the increasing leakage power of SRAM and DRAM, and the increasing refresh power of DRAM, pose challenges to circuit and architecture designers of future memory hierarchies.
Memory that never forgets. Emerging memory technologies such as spin-transfer torque RAM (STT-RAM), phase-change RAM (PCRAM), and resistive RAM (RRAM) are being explored as potential alternatives to existing memories in future computing systems. These emerging nonvolatile memory (NVM) technologies combine the speed of SRAM, the density of DRAM, and the nonvolatility of flash memory, making them attractive candidates for future memory architecture design. After many years of fundamental research, these NVM technologies are maturing and could become mainstream in the near future. For example, four foundries, GlobalFoundries, Samsung, TSMC, and UMC, have announced plans to offer spin-transfer torque magnetoresistive RAM (ST-MRAM or STT-MRAM), possibly starting later this year. Two months ago, Everspin began sampling its new 1Gb STT-MRAM with lead customers, delivering a high-endurance, persistent memory with a DDR4-compatible interface. Intel and Micron have jointly developed an NVM technology called 3D XPoint, which became available on the open market a few months ago.
Modeling NVM technologies. To assist architecture-level and system-level design space exploration of NVM-based memory, various modeling tools have been developed over the last several years. For example, NVSim is a circuit-level model for NVM performance, energy, and area estimation that supports various NVM technologies, including STT-RAM, PCRAM, ReRAM, and legacy NAND flash. NVMain is an architecture-level simulator that supports both DRAM and NVM simulation, offering a fine-grained memory bank model, MLC support, flexible address translation, and interfaces that allow users to explore new memory system designs.
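To give a flavor of the kind of estimates such tools produce, here is a minimal first-order sketch in Python. It is not NVSim's or NVMain's actual model or interface; all cell and peripheral parameters below are illustrative placeholders.

```python
# A first-order sketch (not NVSim/NVMain code): estimate per-access latency and
# energy of a memory array from per-cell and peripheral parameters.
# All numeric values are hypothetical placeholders for illustration only.

def array_estimate(cell_read_ns, cell_write_ns, cell_read_pj, cell_write_pj,
                   periph_ns=1.0, periph_pj=5.0):
    """Return rough (read_ns, write_ns, read_pj, write_pj) for one access."""
    return (periph_ns + cell_read_ns,    # decoder/sense-amp delay + cell sensing
            periph_ns + cell_write_ns,   # decoder/driver delay + cell switching
            periph_pj + cell_read_pj,    # peripheral + cell read energy
            periph_pj + cell_write_pj)   # peripheral + cell write energy

# Hypothetical STT-RAM-like cell vs. an SRAM-like cell, for comparison only.
print("STT-RAM:", array_estimate(2.0, 10.0, 1.0, 10.0))
print("SRAM:   ", array_estimate(1.0, 1.0, 1.0, 1.0))
```

Real tools, of course, derive these numbers from device parameters, array organization, and peripheral circuit models rather than taking them as inputs, but the read/write asymmetry captured here is the key NVM characteristic they expose to architects.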
Architectural Opportunities. As these emerging memory technologies mature, integrating them into the memory hierarchy provides new opportunities for future memory architecture design. Specifically, these NVMs have several characteristics that make them promising as working memories or as storage memories. First, compared to SRAM and DRAM, these emerging NVMs usually have higher densities with comparably fast access times. Second, because of their nonvolatility, NVMs have zero standby power and are immune to radiation-induced soft errors. Third, compared to NAND-flash SSDs, these NVMs are byte addressable. The energy profiles also differ: leakage power is dominant in SRAM and DRAM arrays, whereas PCRAM or MRAM arrays consume no leakage power when idle but consume far more energy during write operations. Hence, the tradeoffs of using different memory technologies at various hierarchy levels should be studied. For example, at ISSCC last year, Toshiba presented a 65-nm embedded processor prototype integrated with a 512KB STT-RAM cache. The memory offers 3.3-ns access time, fast enough for use as a cache, and consumes less than one-tenth of the energy of a conventional SRAM cache, which Toshiba reported as the best power efficiency for an integrated memory to date.
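The leakage-versus-write-energy tradeoff can be made concrete with a back-of-the-envelope sketch. The parameter values below are assumptions chosen only to illustrate the reasoning; they are not measurements from the Toshiba prototype or any other product.

```python
# Back-of-the-envelope comparison of cache energy: constant leakage plus
# per-access dynamic energy. All parameter values are hypothetical.

def cache_energy(accesses_per_s, write_fraction, leak_mw, read_pj, write_pj,
                 seconds=1.0):
    """Total energy (mJ) over an interval: leakage + dynamic read/write energy."""
    reads = accesses_per_s * (1 - write_fraction) * seconds
    writes = accesses_per_s * write_fraction * seconds
    dynamic_mj = (reads * read_pj + writes * write_pj) * 1e-9  # pJ -> mJ
    leakage_mj = leak_mw * seconds                             # mW * s = mJ
    return leakage_mj + dynamic_mj

aps, wf = 1e8, 0.3  # assumed workload: 100M cache accesses/s, 30% writes
sram   = cache_energy(aps, wf, leak_mw=50.0, read_pj=10.0, write_pj=10.0)
sttram = cache_energy(aps, wf, leak_mw=0.5,  read_pj=15.0, write_pj=100.0)
print(f"SRAM: {sram:.1f} mJ   STT-RAM: {sttram:.1f} mJ")
```

Under these assumed numbers the STT-RAM cache wins because idle leakage dominates the SRAM budget; a heavily write-dominated workload would shift the balance back, which is exactly why the per-level tradeoff study matters.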
Leveraging nonvolatility in architecture design. The most distinctive advantage of these emerging memory technologies is their nonvolatility. For example, HP Labs has proposed a low-overhead checkpointing scheme for future exascale computing systems that uses PCRAM instead of HDDs or SSDs. The scalability of future massively parallel processing (MPP) systems is challenged by high failure rates, and current hard disk drive (HDD) checkpointing already incurs an overhead of 25% or more at petascale. HP's solution leverages the nonvolatility of phase-change RAM (PCRAM) and proposes a hybrid local/global checkpointing mechanism after a thorough analysis of MPP system failure rates and failure sources. The proposed PCRAM-based mechanism can take checkpoints with less than 4% overhead on a projected exascale system. Another use of nonvolatility is to design novel persistent memory architectures, as described in another blog post, "A vision of persistence."
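The arithmetic behind such overhead numbers is simple to sketch. The node memory size, bandwidths, and checkpoint interval below are assumptions for illustration, not the figures from HP's study; the point is only how much the checkpoint destination's bandwidth matters.

```python
# Rough checkpoint-overhead sketch: fraction of time spent dumping node state.
# All sizes, bandwidths, and intervals are assumed values for illustration.

def checkpoint_overhead(node_state_gb, bandwidth_gb_s, interval_s):
    """Overhead = time to write one checkpoint / checkpoint interval."""
    dump_time_s = node_state_gb / bandwidth_gb_s
    return dump_time_s / interval_s

# Hypothetical node: 64 GB of state, checkpointed once per hour.
hdd_global  = checkpoint_overhead(64, 0.05, 3600)  # ~50 MB/s effective share of a global HDD file system
pcram_local = checkpoint_overhead(64, 5.0,  3600)  # ~5 GB/s local PCRAM checkpoint
print("HDD-based global checkpoint overhead: {:.1%}".format(hdd_global))
print("PCRAM-based local checkpoint overhead: {:.1%}".format(pcram_local))
```

Because local PCRAM checkpoints avoid the shared, slow global file system, most failures can be recovered from the fast local copy, and only the rarer node-loss failures need the expensive global checkpoint; that is the intuition behind the hybrid local/global scheme.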
About the author: Yuan Xie is a professor in the ECE department at UCSB. More information can be found at http://www.ece.ucsb.edu/~yuanxie
Disclaimer: These posts are written by individual contributors to share their thoughts on the Computer Architecture Today blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGARCH or its parent organization, ACM.