Computer Architecture Today

Informing the broad computing community about current activities, advances and future directions in computer architecture.

Earlier this month, the 9th annual Non-Volatile Memories Workshop took place on the UC San Diego campus.  This year, for the first time, the organizing committee created three awards to recognize some of the best work (new and old) in the field of non-volatile memory technologies.

The Workshop and Its New Awards

The NVMW is unique in several ways.  First, it is the oldest scholarly meeting devoted to non-volatile, solid-state memory technologies and their role in computer systems.  Second, the workshop spans a wide range of disciplines under the umbrella of persistent memories: the presentations are roughly split between systems and architecture topics on one hand and error correction and coding on the other.

Finally, the workshop includes many of the best papers from conferences like ISCA, MICRO, ASPLOS, SOSP, and OSDI, as well as from top information theory venues, because, as a workshop, it accepts both previously published and unpublished work.

This last point, in particular, gives the NVMW an opportunity to recognize work across the entire field (much like IEEE Micro’s Top Picks) rather than limiting candidates to papers published at a particular conference, as conventional “best paper” and “test of time” awards do.

We took that opportunity and created three awards:

The NVMW Persistent Impact Prize (presented this year by Marvell) recognizes a paper published at least five years prior that has had exceptional impact on the fields of study related to non-volatile memories.  The award committee interprets both “impact” and “non-volatile memories” broadly.

The NVMW Memorable Paper Award (presented this year by Toshiba Memory) is awarded to a student paper published in the last two years that is of exceptional quality and is expected to have substantial impact on the fields of study related to non-volatile memories.  The NVMW gave two of these awards: one in the area of system architectures and applications and the other in the area of devices, coding, and information theory.

All three awards include a $1000 cash prize and a customized, 3D-printed medal.  Details on the awards are available at the NVMW website.

I’ll turn the rest of this blog post over to the award recipients to summarize their work.  I have made a few edits for clarity and length.

The 2018 Persistent Impact Prize

The inaugural Persistent Impact Prize was awarded to Mnemosyne from the University of Wisconsin.  The award committee’s citation reads:

Mnemosyne is awarded the 2018 Persistent Impact Prize in recognition of its contributions to the foundations of programming with non-volatile main memory.  It was among the first to identify the need to address persistent memory as part of the programming model; propose lightweight transactions to support persistence; and handle consistency under failure.  The problems it described and the solutions it proposed have influenced the design of most (if not all) of the persistent main memory systems that followed it.

Haris Volos, Andres Jaan Tack, and Michael M. Swift. 2011. “Mnemosyne: Lightweight Persistent Memory”. In Proceedings of the Sixteenth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS XVI). ACM, New York, NY, USA, 91-104. http://dx.doi.org/10.1145/1950365.1950379

(Summary by Haris Volos)

Since the dawn of computing, computer applications have organized their data between two tiers: memory and storage. This organization has been largely motivated by the different performance and data-retention characteristics of memory and storage technologies. As memory is faster than storage, applications typically load their data into memory during processing. Memory, however, is volatile: it retains its contents while powered on but loses them when power is interrupted. To work around this limitation, applications typically store their data in a persistent storage technology, such as a hard disk or solid-state drive, for long-term retention.  Document editing is a familiar example: a word processor loads a document into memory for fast interactive editing, and periodically saves the document to disk in the form of a file for long-term retention; in the unfortunate event of a power interruption or other failure, any unsaved edits are lost.

Advances in solid-state physics over the last decade have given rise to a new breed of persistent memory technologies that provide persistent storage at memory-like performance, blurring the line between memory and storage. Applications can now keep a single image of their data in persistent memory, accessing and updating it in place quickly, without having to load from and save to a file on a slow disk for long-term persistence. Although this removes the danger of data loss in the event of a power interruption or other failure, it raises the risk of data corruption.  Returning to the document-editing example, the word processor must ensure that a failure while editing does not leave a document in which some parts reflect an older version and other parts a newer one. Worse, the document may be corrupted and become unusable. Programmers using persistent memory thus need tools to ensure that they can safely transform data between correct versions, without risk of corruption if a failure happens during the transformation.
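
To make the hazard concrete, here is a minimal C sketch (an illustration for this post, not code from the paper): two stores to persistent memory that must change together. A failure between them, or the two cache lines reaching persistent memory out of order, leaves the data inconsistent, and the inconsistency survives the reboot.

```c
/* Illustrative sketch (not from the paper): two stores that must change
 * together. `bank` stands in for a structure mapped from persistent
 * memory; here we fake it with calloc so the sketch runs anywhere. */
#include <stdint.h>
#include <stdlib.h>

struct accounts {
    int64_t alice;
    int64_t bob;
};

static struct accounts *bank;   /* stand-in for a persistent-memory region */

static void unsafe_transfer(int64_t amount)
{
    bank->alice -= amount;
    /* A power failure here persists the debit but not the credit below,
     * and the money is gone after reboot. Worse, caches may write the two
     * fields back to persistent memory in either order, so even
     * "safe-looking" store orderings can be violated. */
    bank->bob += amount;
}

int main(void)
{
    bank = calloc(1, sizeof *bank);   /* real code: map a persistent region */
    bank->alice = 100;
    unsafe_transfer(40);
    return 0;
}
```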

Mnemosyne was among the first systems to identify the above opportunities and challenges of developing applications for new persistent memory technologies. It provides programmers with the tools and mechanisms to write correct, high-performance applications that use transactions to update persistent data in place without risking corruption under failure. Mnemosyne’s influence has been significant: it stimulated further interdisciplinary research on persistent memory spanning several areas of computer science, including computer architecture, operating systems, data management systems, programming languages, and security, and it provided the groundwork for nearly all of the persistent memory systems that have followed. Its design also inspired the Storage Networking Industry Association (SNIA) persistent memory programming model.
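
And a hedged sketch of the fix, continuing the example above. The helper names below are hypothetical stand-ins, not Mnemosyne’s real interface: the actual system provides durable transactions via compiler-supported atomic blocks and a persistent heap. The no-op helpers only mark where the transaction boundaries would go.

```c
/* Hedged sketch of the fix (hypothetical helper names, not Mnemosyne's
 * real interface). In the actual system, compiler-instrumented atomic
 * blocks log the writes, flush the log, and atomically mark it committed,
 * so both updates become durable together or not at all. */
#include <stdint.h>
#include <stdlib.h>

struct accounts { int64_t alice, bob; };

static struct accounts *bank;   /* stand-in for a persistent-memory region */

static void txn_begin(void)  { /* hypothetical: open a write-ahead log entry */ }
static void txn_commit(void) { /* hypothetical: flush log, mark committed, apply */ }

static void safe_transfer(int64_t amount)
{
    txn_begin();
    bank->alice -= amount;   /* both updates are logged ... */
    bank->bob   += amount;
    txn_commit();            /* ... and become durable together or not at all */
}

int main(void)
{
    bank = calloc(1, sizeof *bank);
    bank->alice = 100;
    safe_transfer(40);
    return 0;
}
```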

The 2018 Memorable Paper Award for System and Architecture

Ana Klimovic, Heiner Litz, and Christos Kozyrakis. 2017. “ReFlex: Remote Flash ≈ Local Flash”. In Proceedings of the Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS ’17). ACM, New York, NY, USA, 345-359. DOI: https://doi.org/10.1145/3037697.3037732

(Summary by Ana Klimovic)

Internet companies such as Facebook and Google host trillions of messages, photos, and videos for their users. Hence, they need storage systems that are massive in scale, fast to access, and cost effective. Scale is achieved by hosting internet services in datacenters with thousands of machines, each contributing its local storage to the global data pool. Speed is achieved by selectively replacing slow hard disks in machines with Flash storage devices that can serve data accesses with 100x lower latency and 10,000x higher throughput.

However, Flash makes it difficult to build a cost-effective storage system. Flash devices are typically underutilized in terms of capacity and throughput due to the imbalance between the compute and storage requirements of the internet services running on each machine. In the past, datacenter operators dealt with the same challenge for disks by allowing services running on each machine to allocate storage over the network on any disk in the datacenter with spare capacity and bandwidth. Remote (over-the-network) access to disks allows all available capacity and throughput to be utilized. Past efforts to implement similar remote access systems for Flash devices have run into significant challenges. Network protocol processing at the throughput of Flash devices requires a large number of processor cores and adds overheads that cancel out the latency advantages of using Flash. Moreover, when two remote machines access the same Flash device, interference between the two access streams can lead to unpredictable performance degradation.

To address these challenges, we developed ReFlex. ReFlex enables high-performance access to remote Flash storage with minimal compute resources and provides predictable performance for multiple services sharing a Flash device over the network. Using a single processing core, the system can process up to 850,000 requests per second, which is 11x more than a traditional Linux network storage system. ReFlex makes remote Flash look like local Flash to applications, making it easy for a service running on one machine to use spare Flash capacity and bandwidth on other machines in the datacenter. To provide predictable performance when multiple remote machines access the same Flash device, ReFlex uses a novel scheduler that processes incoming requests in an interference-aware manner.
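
To give a feel for what interference-aware scheduling involves, here is a toy C sketch (an illustration for this post, not ReFlex’s code) of per-tenant token buckets, a classic rate-limiting building block. ReFlex’s actual scheduler is considerably more sophisticated, deriving admission rates from each tenant’s latency SLO and accounting for the asymmetric costs of Flash reads and writes.

```c
/* Toy sketch (not ReFlex's code): per-tenant token buckets of the kind an
 * interference-aware scheduler can build on. Each tenant earns I/O credit
 * at its provisioned rate; requests that exceed the credit are held back
 * so they cannot degrade other tenants sharing the Flash device. */
#include <stdbool.h>
#include <stdio.h>

struct tenant {
    double tokens;   /* accumulated I/O credit */
    double rate;     /* provisioned IOPS for this tenant */
    double burst;    /* cap on accumulated credit */
};

/* Refill a tenant's bucket for `dt` seconds of elapsed time. */
static void refill(struct tenant *t, double dt)
{
    t->tokens += t->rate * dt;
    if (t->tokens > t->burst)
        t->tokens = t->burst;
}

/* Admit one I/O of the given cost if the tenant has enough credit. */
static bool admit(struct tenant *t, double cost)
{
    if (t->tokens < cost)
        return false;          /* queue or reject: protects other tenants */
    t->tokens -= cost;
    return true;
}

int main(void)
{
    struct tenant a = { .tokens = 0, .rate = 100, .burst = 20 };
    refill(&a, 0.05);                                 /* 50 ms -> 5 tokens */
    printf("admit read (cost 1):   %d\n", admit(&a, 1.0));   /* 1: admitted */
    printf("admit write (cost 10): %d\n", admit(&a, 10.0));  /* 0: held back */
    return 0;
}
```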

ReFlex is having an increasing impact in industry. In collaboration with IBM Research, it has been integrated into the Apache Crail distributed storage system, allowing popular data analytics frameworks to leverage ReFlex to improve their resource efficiency while maintaining high, predictable performance. Broadcom Limited is also porting ReFlex to a system-on-chip (SoC) platform. ReFlex is open-source software, available at: https://www.github.com/stanford-mast/reflex.

The 2018 Memorable Paper Award for Devices and Information Theory

Ahmed Hareedy, Homa Esfahanizadeh, and Lara Dolecek. 2017. “High Performance Non-Binary Spatially-Coupled Codes for Flash Memories”. In Proceedings of the 2017 IEEE Information Theory Workshop (ITW), 229-233. http://ieeexplore.ieee.org/document/8277940/

(Summary by Ahmed Hareedy)

In order to meet the demands of data-hungry applications, modern data storage devices are expected to become increasingly dense. This is a challenging endeavor, and storage engineers are continuously developing novel technologies to meet it. However, these new technologies typically bring an increase in the number, sources, and types of errors. For example, in Flash memories, programming errors and inter-cell interference are sources of errors that are exacerbated as the density of the device increases. The same applies to grid misalignments, read-head flying-height variations, and inter-track interference in hard disk drives. This makes ensuring highly reliable, dense storage devices a formidable challenge. Our research focuses on providing novel and efficient error-correcting coding schemes that are capable of overcoming this challenge. In particular, through informed exploitation of the underlying channel characteristics of the storage device being studied, we provide frameworks for systematically generating error-correcting codes (ECCs), with mathematical guarantees, that offer performance improvements of orders of magnitude relative to the prior state of the art. These frameworks are based on mathematical tools drawn from coding theory and information theory, and rely on advanced techniques from probability theory, linear algebra, graph theory, optimization, and combinatorics.

In this work, we focus on the design and optimization of a particular class of ECCs, namely spatially-coupled codes, for practical storage channels. Spatially-coupled codes have a compact graphical representation and are known to provide complexity/latency gains over other codes, in addition to enhanced theoretical properties. However, there is significant room for improving the finite-length performance of these codes for different applications. We recently discovered that the nature of the error-prone structures in the graph of a code critically depends on the channel underlying the storage device for which the code is used. Here, we focus on practical Flash channels.  We propose a three-stage code design approach that aims to minimize the number of these error-prone structures in the graph of the designed spatially-coupled codes; each of the three stages optimizes a particular set of code design parameters. Codes designed using the proposed approach achieve performance gains of over two orders of magnitude, and of over 200% in raw bit error rate, compared with known techniques in the literature. We are currently working on extending the proposed approach to design high-performance spatially-coupled codes for magnetic recording devices. This approach, along with our previously developed methods, constitutes a comprehensive ECC toolbox for a variety of modern storage and memory systems, including multi-dimensional storage systems.
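
As a toy illustration of what “error-prone structures in the graph of a code” means, the C sketch below (an illustration for this post, not the paper’s method) counts length-4 cycles in the Tanner graph of a small binary parity-check matrix; short cycles are the simplest such structures. The awarded work targets richer objects (for example, absorbing sets) in non-binary spatially-coupled codes, but minimizing counts of harmful substructures is the common thread.

```c
/* Toy sketch (not the paper's method): count length-4 cycles in the Tanner
 * graph of a small binary parity-check matrix H. Two rows (check nodes)
 * that share k columns (variable nodes) contribute k*(k-1)/2 four-cycles. */
#include <stdio.h>

#define ROWS 3
#define COLS 6

static int count_4cycles(const int h[ROWS][COLS])
{
    int total = 0;
    for (int i = 0; i < ROWS; i++) {
        for (int j = i + 1; j < ROWS; j++) {
            int overlap = 0;                /* columns where both rows are 1 */
            for (int c = 0; c < COLS; c++)
                overlap += h[i][c] && h[j][c];
            total += overlap * (overlap - 1) / 2;
        }
    }
    return total;
}

int main(void)
{
    const int h[ROWS][COLS] = {
        { 1, 1, 0, 1, 0, 0 },
        { 1, 1, 1, 0, 1, 0 },
        { 0, 1, 1, 0, 0, 1 },
    };
    printf("4-cycles in H: %d\n", count_4cycles(h));   /* prints 2 */
    return 0;
}
```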

Next Year’s Awards

We will give the same awards at next year’s NVMW.  Abstract submissions for Memorable Paper Award candidates are due in December.  Nominations for the Persistent Impact Prize will be due around the same time.  Details will be available at nvmw.ucsd.edu.

Disclaimer: These posts are written by individual contributors to share their thoughts on the Computer Architecture Today blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGARCH or its parent organization, ACM.