Computer Architecture Today

The memory and storage hierarchy is deepening in modern systems. To mitigate the low performance of memory/storage devices at the bottom of the hierarchy, near-data processing has been studied across different memory and storage devices as a means to reduce access latency and data traffic. The concept has been explored in DRAM (e.g., processing-in-memory), HDDs (e.g., intelligent disks), SSDs (e.g., Smart SSDs), and disaggregated storage services (e.g., AWS S3 Select), among other storage layers, and has encountered different technical challenges and degrees of adoption in practical systems. In this article, we explore the opportunities and challenges of near-data processing in two specific layers of the hierarchy, computational storage devices and disaggregated storage services, although the conclusions we draw may also apply to other layers. We then describe our general observations and point out some potential future research directions.

Computational Storage Devices 

Adding computation to HDD and SSD devices has been studied in research since the 1990s and 2010s, respectively, and solutions have recently become commercially available from Samsung, NGD, and ScaleFlux. These computational storage devices (CSDs) offer great potential to accelerate applications. However, hardware and software limitations hinder their wider adoption, as analyzed in a recent paper. Specifically, the challenges can be summarized into four aspects: 

  • Resource management: A CSD must support multi-tenancy and manage local resources (i.e., processing, memory, and storage) across multiple users to achieve fair resource allocation and proper isolation between programs. 
  • Security: A CSD needs security mechanisms and techniques that provide guarantees similar to those of host CPUs. For example, applications need proper isolation and protection from side-channel or DoS attacks. 
  • Data consistency: A system must have a clearly defined data consistency model between the host CPU and CSDs. For example, the host CPU needs to be notified if data is modified in the CSD, a function supported by neither classic file systems nor common disk interfaces. Furthermore, the system also needs to handle software and hardware failures in CSDs efficiently. 
  • Usability: Debugging in a hybrid host/CSD system is challenging. A programming model with well-specified APIs is required for CSDs. Such a programming model should be easy to use and, ideally, compatible with existing applications. 

The requirements noted above are not fully supported in existing HDD or SSD devices and are quite challenging to add. Furthermore, adding such support will likely incur extra performance overhead in both software and hardware. 

While support for general-purpose programs is challenging, CSDs have seen some success for special-purpose applications like databases. IBM Netezza used CSDs to push down certain relational database operations (e.g., filtering, projection, and aggregation) to the storage devices to reduce data transfer. Oracle Exadata provided a wider array of functionality in its networked storage servers. For databases in particular, the challenges listed above for general programs can be significantly mitigated. For example, the data access pattern of a relational database is very regular: only leaf operators in a query plan (e.g., filtering, projection, and aggregation) need to be pushed down, and they have very specific computation patterns. The challenges of resource management, security, and data consistency are largely handled by the database application, which reduces the design complexity of CSDs. 
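
To make the pushdown boundary concrete, below is a minimal Python sketch of how a query plan might be split between the host and a CSD. The plan structure and the push_down helper are hypothetical illustrations, not any vendor's API: leaf scans with filters and projections are shipped to the device, while the join runs on the host over the much smaller results.

```python
# Hypothetical sketch: splitting a query plan between a host CPU and a
# computational storage device (CSD). All names are illustrative only.
from dataclasses import dataclass
from typing import List


@dataclass
class LeafScan:
    """Leaf operator: scan + filter + projection, eligible for pushdown."""
    table: str
    columns: List[str]   # projection evaluated on the CSD
    predicate: str       # filter evaluated on the CSD


@dataclass
class HostJoin:
    """Non-leaf operator: executed on the host CPU over pushdown results."""
    left: LeafScan
    right: LeafScan
    join_key: str


def push_down(scan: LeafScan) -> str:
    """Translate a leaf scan into the request sent to the CSD.

    A real device would evaluate this on-device and return only the matching,
    projected rows, shrinking the data moved across the storage interface.
    """
    return f"SELECT {', '.join(scan.columns)} FROM {scan.table} WHERE {scan.predicate}"


if __name__ == "__main__":
    plan = HostJoin(
        left=LeafScan("lineitem", ["orderkey", "price"], "price > 100"),
        right=LeafScan("orders", ["orderkey", "orderdate"], "orderdate >= '1995-01-01'"),
        join_key="orderkey",
    )
    # Only the regular, leaf-level operators cross the host/device boundary.
    for scan in (plan.left, plan.right):
        print("pushed to CSD:", push_down(scan))
    print("executed on host: join on", plan.join_key)
```

The simplicity and regularity of the device-side requests is exactly what keeps the on-device logic tractable for a database workload.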

Disaggregated Storage Services

Many modern cloud-native systems adopt a storage-disaggregation architecture that separately manages computation and storage to enable elastic resource scaling and reduce cost. The storage service is accessed over the data center network, and can be viewed as a layer below local HDD/SSD in the storage hierarchy. A unique feature of disaggregated storage is its ability to execute certain computations in or near the data. Examples of such systems include AWS S3 Select, Redshift Spectrum, Aqua, Azure Data Lake, AirMettle, and Minio. These storage services support certain database operations that include but are not limited to filtering, and can significantly improve the performance of data analytics applications. 
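
As a concrete illustration of this style of interface, the sketch below pushes a filter and projection into S3 Select through boto3's select_object_content call; the bucket name, object key, and column names are placeholders. Only rows that satisfy the predicate travel back over the data center network.

```python
# Sketch: pushing a filter into the storage service with S3 Select via boto3.
# Bucket, key, and column names are placeholders for illustration.
import boto3

s3 = boto3.client("s3")

resp = s3.select_object_content(
    Bucket="my-analytics-bucket",   # placeholder bucket
    Key="lineitem.csv",             # placeholder object
    ExpressionType="SQL",
    Expression=(
        "SELECT s.orderkey, s.price FROM s3object s "
        "WHERE CAST(s.price AS FLOAT) > 100"
    ),
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
    OutputSerialization={"CSV": {}},
)

# The response is an event stream; Records events carry the filtered bytes,
# and the Stats event reports how much data was scanned versus returned.
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode("utf-8"), end="")
    elif "Stats" in event:
        d = event["Stats"]["Details"]
        print(f"\nscanned {d['BytesScanned']} bytes, returned {d['BytesReturned']} bytes")
```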

Some recent research has utilized the computational features of public storage services to accelerate more complex data analytics applications. PushdownDB leverages AWS S3 Select to support more sophisticated operations like bloom join, group-by, and top-K using basic filtering and aggregation, and demonstrated that pushdown can reduce runtime by 6.7× and cost by 30%. A follow-up system, FlexPushdownDB, further extends the idea to combine caching and pushdown at a fine granularity, so that even a single table column can be partially cached while pushdown accelerates scans over the rest of the column. FlexPushdownDB achieves a 2.2× speedup over both caching-only and pushdown-only designs. 
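
To give a flavor of how richer operations can be layered on a filter-and-aggregate-only pushdown interface, the sketch below emulates a group-by in two phases: the host first discovers the distinct groups, then issues one filtered aggregate per group so that only scalars cross the network. This mirrors the spirit of PushdownDB's decomposition but is a simplified stand-in, not its actual implementation; storage_select is a local stub, not a real service API.

```python
# Simplified sketch: emulating GROUP BY on a storage interface that only
# supports filtering and aggregation pushdown. storage_select() is a stub
# standing in for a service call such as S3 Select; it is not a real API.

ROWS = [  # toy data standing in for an object held by the storage service
    {"region": "EU", "price": 10.0},
    {"region": "US", "price": 25.0},
    {"region": "EU", "price": 5.0},
]


def storage_select(columns, predicate=None, aggregate=None):
    """Stub for a pushdown-capable storage service (filter + aggregate only)."""
    rows = [r for r in ROWS if predicate is None or predicate(r)]
    if aggregate == "sum":
        return sum(r[columns[0]] for r in rows)
    return [{c: r[c] for c in columns} for r in rows]


# Phase 1: pull only the grouping column and find the distinct groups on the host.
groups = {r["region"] for r in storage_select(["region"])}

# Phase 2: push down one filtered aggregate per group; only scalars come back.
result = {
    g: storage_select(["price"], predicate=lambda r, g=g: r["region"] == g, aggregate="sum")
    for g in sorted(groups)
}
print(result)  # {'EU': 15.0, 'US': 25.0}
```

In a real deployment the two phases trade extra round trips for much less data movement, which is the core bet of computation pushdown.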

In the context of storage disaggregation, the challenges that CSDs face are relatively easy to resolve, if not fully resolved already. Resource management and security are well supported in disaggregated storage services. Data consistency is well defined and accepted by existing applications, although different storage services may support different consistency models and fault-tolerance guarantees. We see two reasons why these challenges are easier to address in a storage-disaggregation system than in CSDs. First, a disaggregated storage service is itself a distributed system, with its own CPUs running an entire software stack, which simplifies the implementation of new features. Second, the storage service is relatively slow compared to local disks due to the network and wide-area distribution, which relaxes the performance requirements of computational functions. 

Nonetheless, pushdown programming models in storage services today are still limited. Most of the systems mentioned above support only simple SQL queries for specific data formats (e.g., CSV and Parquet). It is an exciting research direction to explore more diverse forms of pushdown computation. 

Discussion 

We observe that, for a specific application, applying near-data processing at the lower layers of the memory/storage hierarchy can be both more beneficial to performance and less technically challenging. In particular, we make the following observations: 

  • For a specific application, the lower layers of the storage hierarchy are more likely to limit performance if the application actually accesses those layers. This suggests that near-data processing at the lower layers has greater performance potential. 
  • The practical challenges (e.g., resource management, security, and data consistency) are easier to address in the lower layers of a storage hierarchy, such as a disaggregated storage service, as opposed to a computational storage device. 
  • Computation pushdown is more effective and easier to implement for certain applications like relational databases and has seen success in both CSDs and storage services. It seems easier to exploit near-data processing for domain-specific rather than general-purpose applications. 

Looking forward, we see a large number of research questions that are not yet fully answered. For example, how can we support more advanced database operations? How can we support diverse data formats (e.g., semi-structured data, graph data, images, and videos)? What are the killer applications of this technology besides data analytics? How can domain-specific hardware accelerators for pushdown be developed for higher performance and lower cost? Overall, many exciting research questions are waiting to be answered in the years to come. 

About the Author: Xiangyao Yu is an Assistant Professor at the University of Wisconsin-Madison. His current research interests include transaction processing and HTAP, cloud-native databases, and new hardware for databases. 

Disclaimer: These posts are written by individual contributors to share their thoughts on the Computer Architecture Today blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGARCH or its parent organization, ACM.