FastPath 2021
February 15, 2021
International Workshop on Performance Analysis of Machine Learning Systems
March 28, 2021 – Virtual
In conjunction with ISPASS 2021
Submissions Due: March 15, 2021
FastPath 2021 brings together researchers and practitioners involved in cross-stack hardware/software performance analysis, modeling, and evaluation for efficient machine learning systems. Machine learning demands a tremendous amount of computing. Current machine learning systems are diverse, including cellphones, high performance computing systems, database systems, self-driving cars, robotics, and in-home appliances. Many machine learning systems have customized hardware and/or software. The types and components of such systems vary, but a partial list includes traditional CPUs assisted with accelerators (ASICs, FPGAs, GPUs), memory accelerators, I/O accelerators, hybrid systems, converged infrastructure, and IT appliances. Designing efficient machine learning systems poses several challenges.
These include distributed training on big data, hyper-parameter tuning for models, emerging accelerators, fast I/O for random inputs, approximate computing for training and inference, programming models for diverse machine learning workloads, high-bandwidth interconnects, efficient mapping of processing logic onto hardware, and cross-stack performance optimization. Emerging infrastructure supporting big data analytics, cognitive computing, large-scale machine learning, mobile computing, and the internet of things exemplifies system designs optimized for machine learning at scale.
FastPath seeks to facilitate the exchange of ideas on performance analysis and evaluation of machine learning/AI systems and invites papers on a wide range of topics, including but not limited to:
- Workload characterization, performance modeling, and profiling of machine learning applications
- GPU, FPGA, and ASIC accelerators
- Memory, I/O, storage, network accelerators
- Hardware/software co-design
- Efficient machine learning algorithms
- Approximate computing in machine learning
- Power/energy efficiency and learning acceleration
- Software, libraries, and runtimes for machine learning systems
- Workload scheduling and orchestration
- Machine learning in cloud systems
- Large-scale machine learning systems
- Emerging intelligent/cognitive systems
- Converged/integrated infrastructure
- Machine learning systems for specific domains, e.g., finance, biology, education, commerce, healthcare