Call for Papers:

On-device Intelligence Workshop at MLSys 2020

On-device Intelligence Workshop in conjunction with MLSys 2020
Austin, Texas, USA
March 4th, 2020

Important Dates:

  • Submission Deadline: Jan 15, 2020
  • Decision Notification: Jan 27, 2020

AI has the potential to transform almost everything around us. It can change the way humans interact with the world by making the objects around them “smart”: capable of constantly learning, adapting, and providing proactive assistance. The beginnings of this trend can already be seen in the new capabilities coming to smartphones (speech assistants, camera night mode) as well as in a new class of “smart” devices such as smart watches and smart thermostats. However, these “smart” devices still run much of their computation on the cloud (or a remote host), which costs transmission power, adds response latency, and raises potential privacy concerns. This limits their ability to provide a compelling user experience and to realize the true potential of an “AI everywhere” world.

This workshop seeks to accelerate the transition toward a truly “smart” world in which AI capabilities permeate all devices and sensors. The workshop will focus on how to distribute AI capabilities across the whole system stack and how to co-design edge-device capabilities and AI algorithms. It will bring together researchers and practitioners with diverse backgrounds to cover the whole stack: from application domains such as computer vision and speech, to the AI and machine learning algorithms that enable them, to the SoC/chip architectures that run them, and finally to the circuits, sensors, and memory technologies needed to build these devices.

Topics of interest cover everything related to enabling smart devices, including but not limited to the following:

  • Novel on-device capabilities for vision, speech, and natural language processing.
  • Distributing AI capabilities across the whole system stack from data capture at the edge to the cloud instead of performing all the compute in the cloud.
  • Machine learning for system tasks such as compression, scheduling, and caching.
  • On-device privacy-preserving learning.
  • Efficient machine learning models for edge devices.
  • Dynamic neural networks, including techniques such as early termination and mixture-of-experts.
  • Platform-aware model optimization.
  • Efficient hardware accelerator design.
  • Tools for architecture modeling, design space exploration, and algorithm mapping.
  • Efficient model execution on edge devices, including techniques such as scheduling and tiling.
  • Emerging technologies such as near-sensor, near-memory, or near-storage computing.

Organizing Committee

  • Vikas Chandra, Facebook (Program co-chair)
  • Pete Warden, Google (Program co-chair)
  • Yingyan Lin, Rice University (General co-chair)
  • Ganesh Venkatesh, Facebook (General co-chair)
  • Ariya Rastrow, Amazon
  • Raziel Alvarez, Google
  • Song Han, MIT
  • Greg Diamos, Landing.AI
  • Hernan Badino, Facebook Reality Labs

Program Committee

  • Yingyan Lin, Rice University
  • Ariya Rastrow, Amazon
  • Raziel Alvarez, Google
  • Jan Kautz, Nvidia Research
  • Zhangyang Wang, TAMU
  • Yiran Chen, Duke
  • Richard Baraniuk, Rice University
  • Sai Zhang, Apple
  • Hernan Badino, Facebook Reality Labs
  • Kiran Somasundaram, Facebook Reality Labs
  • Christian Fuegen, Facebook AI
  • Meng Li, Facebook
  • Liangzhen Lai, Facebook
  • Yu-Hsin Chen, Facebook