Call for Papers:

XTensor @ ASPLOS 2024

Abstract or Paper Registration Deadline
March 25, 2024
Final Submission Deadline
March 25, 2024

XTensor: 1st Workshop on Cross-stack Optimization of Tensor Methods
In conjunction with ASPLOS 2024
April 27, 2024, 1-5 pm @ San Diego, CA, USA

Please find details and submission link here: http://fruitfly1026.github.io/static/files/xtensor-asplos24.html
Please contact Jiajia Li at jiajia.li@ncsu.edu if you have any questions.

Tensor methods are becoming ever more important for representing data and analyzing its inherent properties, as correlations within data gain importance in many domains. Applications of tensor methods include machine learning/deep learning, quantum chemistry/physics, quantum circuit simulation, social networks, and healthcare, to name a few. Research on tensor methods spans multiple domains, including computer architecture, programming languages, compilers, and parallel computing. This workshop aims to gather researchers from diverse computer systems backgrounds to present and discuss their work on tensor methods, and then to seek cross-stack solutions for improving the performance of tensor algorithms.

Scope
XTensor is a venue for discussion and brainstorming at the intersection of software and hardware research.
Research topics include, but are not limited to:

Research angles:

  • Programming abstractions
  • Compiler techniques
  • Runtime optimization
  • Libraries/frameworks
  • High-performance algorithms
  • Hardware architecture
  • Performance modeling
Tensor methods:
  • Data: dense, sparse, structured, symmetric layout; random or non-negative values; etc.
  • Randomized, approximate, etc.
  • Tensor operators, such as tensor products, tensor-matrix multiplication, tensor-tensor multiplication, matrix-matrix multiplication
  • Tensor decompositions, such as canonical polyadic decomposition (CPD), Tucker decomposition, etc.
  • Tensor networks, such as tensor train, tensor ring, hierarchical Tucker, the projected entangled pair states (PEPS), etc.
  • Tensor regression, tensor component analysis, tensor-structured dictionary learning, etc.
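To make the tensor operators above concrete, here is a minimal sketch, assuming NumPy and dense data, of a mode-1 tensor-times-matrix (TTM) product; the function name `ttm_mode1` is hypothetical and chosen only for illustration:

```python
import numpy as np

def ttm_mode1(X, U):
    """Multiply a 3-way tensor X (I x J x K) by a matrix U (R x I)
    along mode 1, yielding an R x J x K tensor: Y[r,j,k] = sum_i U[r,i] * X[i,j,k]."""
    return np.einsum('ijk,ri->rjk', X, U)

X = np.arange(24.0).reshape(2, 3, 4)  # small dense 2x3x4 tensor
U = np.ones((5, 2))                   # maps the mode-1 dimension from 2 to 5
Y = ttm_mode1(X, U)
print(Y.shape)  # (5, 3, 4)
```

TTM of this kind is the building block of, e.g., the Tucker decomposition; sparse, structured, or accelerator-specific variants of exactly this operation are within the workshop's scope.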

Platforms:

  • CPUs
  • GPUs (e.g., NVIDIA, AMD, Intel)
  • AI accelerators (e.g., SambaNova, Graphcore, Habana, GroqRack)
  • Wafer-scale systems (e.g., Cerebras)
  • FPGAs
  • ASICs
  • Simulators of any kind

Position Papers
Discussion and communication are the primary goals of the workshop. Thus, we ask only for 2-page position papers, and we are flexible about content. A submission could present early, in-progress research; it could report incomplete experiments or preliminary results; it could survey recent research trends or tools; it could pose an important open question and call for solutions; or it could be an experience report, even of failed approaches, provided it offers lessons learned. Submitted papers will undergo peer review by a program committee of experts from diverse research domains working on tensor problems.

Papers should follow the two-column formatting guidelines for SIGPLAN conferences (the acmart format with the sigplan two-column option) and may be up to 2 pages, excluding references. Review is single-blind; please include authors' names on the submitted PDF.

Paper submission will be via EasyChair. Accepted papers will not be published in proceedings; presentation slides will be posted on the workshop website.

Important Dates
Paper submission: March 25, 2024
Author Notification: April 8, 2024
Workshop: April 27, 2024, 1-5 pm