Workshop on Benchmarking Machine Learning Workloads
To be held along with the International Symposium on Performance Analysis of Systems and Software (ISPASS)
Boston, Massachusetts, USA
April 5, 2020
Submissions Due: March 1, 2020
With evolving system architectures, hardware and software stacks, diverse machine learning (ML) workloads, and data, it is important to understand how these components interact with one another. Well-defined benchmarking procedures help evaluate and reason about the performance gains of ML workload-to-system mappings. We welcome novel submissions on benchmarking machine learning workloads from all disciplines, such as image and speech recognition, language processing, drug discovery, simulations, and scientific applications. Key problems that we seek to address are: (i) which representative ML benchmarks cater to workloads seen in industry, national labs, and interdisciplinary sciences; (ii) how to characterize ML workloads based on their interaction with hardware; (iii) which novel aspects of hardware, such as heterogeneity in compute, memory, and networking, will drive their adoption; and (iv) performance modeling and projection to next-generation hardware. Along with selected publications, the workshop program will also feature experts in these research areas presenting their recent work and potential directions to pursue.
We solicit short/position papers (2-4 pages) as well as longer full papers (4-6 pages). Submitting a paper to the workshop will not prevent you from submitting it to a conference in the future, as there are no official proceedings. The workshop thus provides an ideal venue for getting early feedback on your work!