Call for Papers:

Accelerated Machine Learning (AccML) @ HiPEAC 2025

Abstract or Paper Registration Deadline
November 18, 2024
Final Submission Deadline
November 18, 2024

7th Workshop on Accelerated Machine Learning (AccML)
Co-located with the HiPEAC 2025 Conference
January 21, 2025
Barcelona, Spain

https://accml.dcs.gla.ac.uk/
HiPEAC: https://www.hipeac.net/2025/barcelona/#/program/sessions/8176/

Call for contributions
The remarkable performance achieved in a variety of application areas (natural language processing, computer vision, games, etc.) has led to the emergence of heterogeneous architectures to accelerate machine learning workloads. In parallel, production deployment, model complexity, and model diversity have pushed for higher-productivity systems: more powerful programming abstractions, software and system architectures, dedicated runtime systems and numerical libraries, and deployment and analysis tools. Deep learning models are generally memory- and compute-intensive, for both training and inference. Accelerating these operations has obvious advantages, first by reducing energy consumption (e.g. in data centers), and second by making these models usable on smaller devices at the edge of the Internet. In addition, while convolutional neural networks have motivated much of this effort, numerous applications and models involve a wider variety of operations, network architectures, and data processing. These applications and models continually challenge computer architecture, the system stack, and programming abstractions. The high level of interest in these areas calls for a dedicated forum to discuss emerging acceleration techniques and computation paradigms for machine learning algorithms, as well as the application of machine learning to the construction of such systems.

Topics
Topics of interest include (but are not limited to):

  • Novel ML systems: heterogeneous multi/many-core systems, GPUs and FPGAs;
  • Software ML acceleration: languages, primitives, libraries, compilers and frameworks;
  • Novel ML hardware accelerators and associated software;
  • Emerging semiconductor technologies with applications to ML hardware acceleration;
  • ML for the construction and tuning of systems;
  • Cloud and edge ML computing: hardware and software to accelerate training and inference;
  • ML techniques for more efficient model training and inference (e.g. sparsity, pruning);
  • Computing systems research addressing the privacy and security of ML-dominated systems.

Submission
Papers will be reviewed by the workshop’s technical program committee on the basis of quality, relevance to the workshop’s topics, and, above all, their potential to spark discussion about directions, insights, and solutions in the context of accelerating machine learning. Research papers, case studies, and position papers are all welcome.
In particular, we encourage work-in-progress submissions: to facilitate the sharing of thought-provoking ideas and high-potential but preliminary research, authors are welcome to submit papers describing early-stage, in-progress, or exploratory work in order to elicit feedback, discover collaboration opportunities, and spark productive discussions.
The workshop does not have formal proceedings.

Important Dates
Submission deadline: November 18, 2024 (Deadline extended)
Notification of decision: December 16, 2024

Organizers
José Cano (University of Glasgow)
Valentin Radu (University of Sheffield)
José L. Abellán (University of Murcia)
Marco Cornero (Google DeepMind)
Ulysse Beaugnon (Google DeepMind)
Juliana Franco (Google DeepMind)