Archive of posts tagged: Machine Learning
![Rethinking Data Storage and Preprocessing for ML](https://www.sigarch.org/wp-content/uploads/2021/02/AdobeStock_113578100-300x175.jpeg)
Rethinking Data Storage and Preprocessing for ML
![Multi-Modal On-Device AI: Heterogeneous Computing Once More?](https://www.sigarch.org/wp-content/uploads/2020/12/AdobeStock_371553348-300x175.jpeg)
Multi-Modal On-Device AI: Heterogeneous Computing Once More?
What is multi-modal AI? Prior research on developing on-device AI solutions has primarily focused on improving the TOPS (Tera Operations Per Second) or TOPS/Watt of AI accelerators by leveraging sparsity, quantization, or efficient neural network architectures...

![A Case for Optical Deep Neural Networks](https://www.sigarch.org/wp-content/uploads/2020/10/AdobeStock_192667656-300x175.jpeg)
A Case for Optical Deep Neural Networks
Deep Neural Networks have been a major focus for computer architects in the recent past due to the massive parallelism available in computation, combined with the massive amount of data re-use. While the proposed architectures have inspired industry innovations such...

![Architecture Innovation Accelerates Artificial Intelligence](https://www.sigarch.org/wp-content/uploads/2020/09/AdobeStock_227994414-300x175.jpeg)
Architecture Innovation Accelerates Artificial Intelligence
[Editor’s Note: This article originally appeared on the blogs of the HLF and the CCC and is re-posted here with permission.] As part of the first day of the Virtual Heidelberg Laureate Forum (HLF), David A. Patterson, who won the 2017 ACM A.M. Turing Award “for...

![Building Performance Scalable and Composable Machine Learning Accelerators](https://www.sigarch.org/wp-content/uploads/2020/05/AdobeStock_245601545-300x175.jpeg)