Computer Architecture Today

Informing the broad computing community about current activities, advances and future directions in computer architecture.

The world of storage is changing rapidly. We have exciting new technologies such as Intel DC Persistent Memory, along with new applications such as machine learning and blockchains that place new requirements on storage systems. Researchers are also working on new technologies such as glass storage and DNA storage, which are expected to become commercially viable over the next decade. It is an exciting time to be a storage researcher!

New applications for storage 

Modern applications stress storage systems in new ways. For example, consider training machine-learning models. This workload has a regular pattern: the same data is read in each epoch (sometimes in random order), pre-processed, and then used for training. However, the sheer processing power of GPUs poses a problem: data is consumed much faster than it can be fetched from storage, leaving expensive GPUs idle. Recent work such as Quiver and Hoard proposes domain-specific caching systems that aim to deliver data efficiently to a cluster of machines training machine-learning models.
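To make the pattern concrete, here is a minimal sketch of the underlying idea: a background thread prefetches the next samples into a bounded in-memory buffer so the trainer never blocks on storage. The read_sample() helper and file-path inputs are assumptions for illustration; real systems like Quiver and Hoard are distributed caches shared across jobs and machines, with many more optimizations.

```python
# Minimal sketch of prefetching training data ahead of the GPU consumer.
# read_sample() and the bounded buffer are illustrative assumptions, not the
# actual Quiver/Hoard design.
import queue
import random
import threading

def read_sample(path):
    """Stand-in for reading and pre-processing one training sample."""
    with open(path, "rb") as f:
        return f.read()

def prefetch(paths, out_q):
    """Read samples ahead of the trainer so storage latency is hidden."""
    order = list(paths)
    random.shuffle(order)             # epochs often re-read the data in random order
    for p in order:
        out_q.put(read_sample(p))     # blocks once the buffer is full
    out_q.put(None)                   # end-of-epoch marker

def train_one_epoch(paths):
    buffer = queue.Queue(maxsize=64)  # bound the memory used by prefetched samples
    t = threading.Thread(target=prefetch, args=(paths, buffer), daemon=True)
    t.start()
    while (sample := buffer.get()) is not None:
        pass                          # feed `sample` to the GPU here
    t.join()
```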

To provide another example, blockchains such as Ethereum require authenticated storage: every read of a value should return a proof that the value is correct. Supporting authenticated storage is challenging. For example, Ethereum uses the RocksDB key-value store to hold its authenticated data, but this results in significant read and write amplification. Verifying Ethereum transactions and spinning up a new Ethereum node are bottlenecked by I/O, as they involve reading and authenticating data on the critical path. Developing storage systems for platforms like Ethereum is an active area of research.
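To illustrate what "every read returns a proof" means, the toy sketch below builds a binary Merkle tree over a list of values: a read returns the value plus the sibling hashes needed to recompute the root. Ethereum actually uses a Merkle Patricia trie layered on RocksDB; this flat tree is only an illustration of the idea.

```python
# Toy authenticated storage: a binary Merkle tree over a power-of-two list of
# values. A read returns (value, proof); the proof is checked against the root.
# Ethereum's real structure is a Merkle Patricia trie, not this flat tree.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(values):
    """Return all tree levels, leaves first."""
    level = [h(v) for v in values]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def read_with_proof(values, levels, index):
    """Read values[index] plus the sibling hashes along the path to the root."""
    proof, i = [], index
    for level in levels[:-1]:
        sibling = i ^ 1
        proof.append((level[sibling], sibling < i))  # (hash, sibling-is-left?)
        i //= 2
    return values[index], proof

def verify(root, value, proof):
    digest = h(value)
    for sibling, sibling_is_left in proof:
        digest = h(sibling + digest) if sibling_is_left else h(digest + sibling)
    return digest == root

values = [b"a", b"b", b"c", b"d"]
levels = build_tree(values)
root = levels[-1][0]
value, proof = read_with_proof(values, levels, 1)
assert verify(root, value, proof)
```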

On this blog, Spyros Blanas has talked about how scientific computing applications are bottlenecked by I/O rather than by CPU. Storage devices are often the slowest part of these systems, and applications access storage in suboptimal patterns (for example, by doing random reads to millions of small objects). The scientific computing community seems to be addressing these problems by moving from POSIX file systems to transactional object stores, but many open problems remain in this area.
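As a rough illustration of why millions of small objects hurt, the sketch below packs them into a single file with an in-memory offset index, turning many tiny random reads into one large sequential write plus cheap offset lookups. The file name and index layout are assumptions; real transactional object stores add metadata management, transactions, and concurrency control that this sketch ignores.

```python
# Illustrative only: pack many small objects into one file plus an offset index,
# so the storage device sees large sequential I/O instead of millions of tiny
# random reads. Real object stores handle transactions and concurrency.
def pack(objects, path):
    """Write objects back-to-back; return {name: (offset, length)}."""
    index = {}
    with open(path, "wb") as f:
        for name, data in objects.items():
            index[name] = (f.tell(), len(data))
            f.write(data)
    return index

def read_object(path, index, name):
    offset, length = index[name]
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

index = pack({"obj%d" % i: b"x" * 100 for i in range(1000)}, "packed.dat")
assert read_object("packed.dat", index, "obj42") == b"x" * 100
```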

New storage technologies 

Persistent Memory. In 2019, we finally saw the long-awaited persistent memory reach the market with the introduction of Intel DC Persistent Memory. This storage-class memory has latencies 2–4x that of DRAM while providing 1/6th to 1/3rd of DRAM's bandwidth. These specs make persistent memory significantly faster than even the enterprise solid-state drives available today. Interestingly, persistent memory is accessed similarly to DRAM, in units of cache lines, rather than the 4096-byte blocks used by hard drives and solid-state drives. Persistent memory can be used either as memory (giving the illusion of a large amount of DRAM) or as storage (on which you could build file systems). The introduction of persistent memory opens up a number of interesting research problems.
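Here is a hedged sketch of the storage-style usage: memory-map a file on a DAX-mounted persistent-memory file system and update it with ordinary loads and stores. The /mnt/pmem path is an assumption, and Python's mmap.flush() stands in for the cache-line flush instructions (e.g., CLWB plus a fence, usually issued via libpmem/PMDK) that real persistent-memory code uses for durability.

```python
# Hedged sketch: persistent memory used as storage via a DAX-mapped file.
# Stores are ordinary memory writes; durability needs explicit flushes.
# Real code issues CLWB/SFENCE via libpmem; mmap.flush() is only a stand-in.
# The path /mnt/pmem/log assumes a DAX file system is mounted there.
import mmap
import os

PATH = "/mnt/pmem/log"
SIZE = 4096

fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o644)
os.ftruncate(fd, SIZE)
buf = mmap.mmap(fd, SIZE)        # load/store access, no read()/write() syscalls

buf[0:5] = b"hello"              # a store to a cache line, not a 4 KB block write
buf.flush(0, mmap.PAGESIZE)      # ask that the update be made durable

buf.close()
os.close(fd)
```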

Glass Storage. While persistent memory has already hit the market, a technology that is about a decade from commercialization is glass storage. Microsoft Research is working with the University of Southampton to develop this technology, in which information is stored by using lasers to create small 3D nanostructures inside a block of fused silica. The project is motivated by the ever-increasing amount of data stored online: Microsoft Azure needs a storage medium that is cheaper and higher performance than tape. Glass storage promises high density (about 50 TB on a sliver of glass) and a long shelf life: the data does not rot for over 1,000 years. Machine learning is used to read from and write to glass storage, and Microsoft is working on building an end-to-end storage system around the glass media. While it could be a while before this technology is commercialized, it is expected to significantly impact how cloud providers build archival storage systems.

DNA Storage. A technology even further from commercialization than glass storage is DNA storage. Microsoft Research and the University of Washington are working on this project, which seeks to map data into DNA nucleotide sequences. The resulting storage medium has extremely high density: 1 exabyte per cubic millimeter. The data is preserved for 500 years. While DNA storage provides the highest density of any known storage medium, its access latency is high, on the order of tens of hours to days. For DNA storage to become a viable archival medium, this latency problem has to be solved: users typically want their archival data retrieved within minutes to an hour.
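To illustrate what "map data into DNA nucleotide sequences" means, the toy encoder below packs two bits per base (A, C, G, T). Real DNA storage pipelines add addressing, error-correcting codes, and constraints on the synthesized sequences (for example, avoiding long runs of the same base), none of which this sketch handles.

```python
# Toy illustration of mapping bytes to DNA bases at 2 bits per nucleotide.
# Real DNA storage adds addressing, error correction, and sequence constraints;
# this sketch ignores all of that.
BASES = "ACGT"

def encode(data: bytes) -> str:
    seq = []
    for byte in data:
        for shift in (6, 4, 2, 0):            # four 2-bit symbols per byte
            seq.append(BASES[(byte >> shift) & 0b11])
    return "".join(seq)

def decode(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

assert decode(encode(b"hi storage")) == b"hi storage"
```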

New challenges for architects and systems designers

What challenges do these new applications and storage technologies pose? It is too early to consider the challenges of DNA and glass storage, but here are a few posed by persistent memory:

The line between memory and storage is blurring. Persistent memory is accessed via page tables, similar to DRAM, and setting up page tables is one of the big costs of accessing persistent memory. While software techniques can help delay or amortize this cost, solving it fundamentally will require architectural innovation. Another big pain point is obtaining huge pages: current hardware and software make it challenging to get huge pages on a system where memory is constantly allocated and deallocated (see the discussion section in SplitFS). Changing the architecture so that software can reliably obtain huge pages would have a significant impact.

Ensuring reliability. Traditional storage systems are accessed via the system-call interface, which provides a convenient point at which read and write requests can be intercepted. However, persistent memory is accessed directly using load and store instructions (similar to DRAM). How to provide reliability on top of persistent memory without sacrificing performance remains an open question. See NOVA-Fortis, Pangolin, and Vilamb for some interesting work in this space.

Building efficient end-to-end systems. For a long time, the slowest part of an application or storage system was the storage itself. With the introduction of persistent memory, this is no longer true: network stacks, for example, now introduce more latency than storage access, and concurrency primitives such as locks add significant overhead relative to the latency of a persistent-memory access (see RECIPE, MOD, and FAST and FAIR for three examples of efficient persistent-memory data structures). Reducing the overhead of these other components so that the end-to-end system can actually take advantage of persistent memory remains a challenge (see AsyncNVM, HotPot, and Orion).

This article does not cover all current challenges in storage systems; such a list would be prohibitively long for this format. But the introduction of new memory and archival technologies is rekindling interest in storage systems and causing systems designers and architects to rethink assumptions that have held for decades.

About the author: Vijay Chidambaram is an Assistant Professor in the Department of Computer Science at the University of Texas at Austin. His research group, the UT Systems and Storage Lab, works on all things related to storage. 

Disclaimer: These posts are written by individual contributors to share their thoughts on the Computer Architecture Today blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGARCH or its parent organization, ACM.