by Mithuna Thottethodi and T. N. Vijaykumar on Jan 22, 2019 | Tags: Accelerators, Machine Learning, Specialization
The GPGPU’s massive multithreading is unnecessary for DNNs, and imposes performance, area, and energy overheads. By avoiding such multithreading, the TPU is more efficient.
by Nilesh Jain, Omesh Tickoo, Ravi Iyer on Dec 6, 2018 | Tags: Architecture, Machine Learning, Vision
The tremendous growth in visual computing is fueled by the rapid deployment of visual sensing (e.g., cameras) in many uses, ranging from digital security/surveillance and automated retail (e.g., smart cameras and analytics) to interactive/immersive...
by Reetuparna Das and Tushar Krishna on Sep 17, 2018 | Tags: Accelerators, Machine Learning, Specialization
While the concept of hardware acceleration has been around for a while, DNN accelerators are perhaps the first to see broad commercial adoption, thanks to the AI/ML wave. Giant software corporations, veteran hardware companies, and a plethora of start-ups have...
by Yuan Xie on Jul 12, 2018 | Tags: Accelerators, Emerging Technology, Machine Learning, Memory, Near Data Computing, Specialization
A previous blog post, “Blurring the lines between memory and compute” by R. Das, nicely summarized the history of, and recent trends in, addressing the memory-wall challenge with processing-in-memory (PIM) ideas. This post further highlights...
by Cliff Young on May 3, 2018 | Tags: Benchmarks, Machine Learning, Measurements, Specialization
It’s a marvelous time in computer systems. For me, working in Deep Learning feels like living through a scientific revolution. Kuhn described this kind of change in his classic book, The Structure of Scientific Revolutions, where a new paradigm takes hold, causing entire fields to change their standard...