Call for Participation:

Tutorial on Hardware Architectures for Deep Neural Networks @ MICRO

Early Registration Deadline
September 15, 2017
Registration Deadline
October 15, 2017

Tutorial on Hardware Architectures for Deep Neural Networks
co-located with MICRO 2017
Boston, USA
October 15, 2017

Speakers: Joel Emer (Nvidia/MIT), Vivienne Sze (MIT), Yu-Hsin Chen (MIT)

Deep neural networks (DNNs) are currently widely used for many AI applications, including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, this accuracy comes at the cost of high computational complexity. Accordingly, designing efficient hardware architectures for deep neural networks is an important step towards enabling the wide deployment of DNNs in AI systems.

In this tutorial, we will provide an overview of DNNs, discuss the tradeoffs of the various architectures that support DNNs, including CPUs, GPUs, FPGAs, and ASICs, and highlight important benchmarking/comparison metrics and design considerations. We will then describe recent techniques that reduce the computational cost of DNNs from both the hardware architecture and network algorithm perspectives. Finally, we will discuss the different hardware requirements for inference and training.

Additional information is available at the tutorial website.