Call for Papers:

Bench 2022


2022 BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench’22)
http://www.benchcouncil.org/bench22/index.html

Full Papers deadline: July 28, 2022, 23:59:59 AoE
Notification: September 6, 2022, 23:59:59 AoE
Final Papers due: October 11, 2022, 23:59:59 AoE
Conference dates: November 7–9, 2022 (Virtual)

Submission site: https://bench2022.hotcrp.com/

Introduction

Benchmarks, data, standards, measurements, and optimizations are fundamental human activities and assets. The Bench conference has two essential duties: to promote data- and benchmark-based quantitative approaches to tackling multidisciplinary and interdisciplinary challenges, and to connect the architecture, system, data management, algorithm, and application communities for better co-design based on inherent workload characteristics.

The Bench conference provides a high-quality, single-track forum for presenting results and discussing ideas that further the knowledge and understanding of the benchmarks, data, standards, measurements, and optimizations community as a whole. It is a multidisciplinary and interdisciplinary conference: past meetings have attracted researchers and practitioners from the architecture, system, algorithm, and application communities. The program includes both invited and contributed sessions.

The Bench conference regularly presents the BenchCouncil Achievement Award ($3,000), the BenchCouncil Rising Star Award ($1,000), the BenchCouncil Best Paper Award ($1,000), and the BenchCouncil Distinguished Doctoral Dissertation Awards in Computer Architecture ($1,000) and in other areas ($1,000). This year, the BenchCouncil Distinguished Doctoral Dissertation Award includes two tracks: computer architecture and other areas. From the submissions to each track, four candidates will be selected as finalists. They will be invited to give a 30-minute presentation at the Bench'22 conference and to contribute research articles to the BenchCouncil Transactions on Benchmarks, Standards and Evaluation. Finally, one of the four finalists in each track will receive the award, which carries a $1,000 honorarium.

General Co-Chairs
Emmanuel Jeannot, INRIA, France
Peter Mattson, Google, USA
Wanling Gao, University of Chinese Academy of Sciences, China

Program Co-Chairs
Chunjie Luo, ICT, Chinese Academy of Sciences, China
Ce Zhang, ETH Zurich, Switzerland
Ana Gainaru, Oak Ridge National Laboratory, USA

Publicity Co-Chairs
David Kanter, MLCommons
Rui Ren, Beijing Institute of Open Source Chip
Zhen Jia, Amazon

Web Co-Chairs
Jiahui Dai, BenchCouncil
Qian He, Beijing Institute of Open Source Chip

Award Committees
BenchCouncil Distinguished Doctoral Dissertation Award Committee in Other Areas:
Jack Dongarra, University of Tennessee
Xiaoyi Lu, The University of California, Merced
Jeyan Thiyagalingam, STFC-RAL
Lei Wang, ICT, Chinese Academy of Sciences
Spyros Blanas, The Ohio State University

BenchCouncil Distinguished Doctoral Dissertation Award Committee in Computer Architecture:
Peter Mattson, Google
Vijay Janapa Reddi, Harvard University
Wanling Gao, Chinese Academy of Sciences

Bench Steering Committee
Jack Dongarra, University of Tennessee
Geoffrey Fox, Indiana University
D. K. Panda, The Ohio State University
Felix Wolf, TU Darmstadt
Xiaoyi Lu, University of California, Merced
Resit Sendag, University of Rhode Island, USA
Wanling Gao, ICT, Chinese Academy of Sciences & UCAS
Jianfeng Zhan, ICT, Chinese Academy of Sciences & BenchCouncil

Call for papers

The Bench conference encompasses a wide range of areas and topics in benchmarking, measurement, and evaluation methods and tools. We solicit papers describing original, previously unpublished work. The areas and topics of interest include, but are not limited to, the following.

Areas:

  • Architecture: Benchmarking of architecture and hardware, e.g., benchmark suites for CPUs, GPUs, memory, and HPC systems.
  • Data Management: Evaluation of data management and storage, e.g., benchmark specifications and tools for databases.
  • Algorithm: Evaluation of algorithms, e.g., evaluation rules and datasets in machine learning, deep learning, and reinforcement learning.
  • Datasets: Evaluation of data quality, algorithms for optimizing data, and datasets used for research and benchmarking.
  • System: Testing of software systems, e.g., operating systems, distributed systems, and web servers.
  • Network: Measurement of communication networks, e.g., networks in data centers and wireless, mobile, ad-hoc, and sensor networks.
  • Reliability and Security: Measurement of reliability and security.
  • Application: Measurement of applications in domains such as medicine, finance, and education.

Topics:

  • Benchmark and standard specifications, implementations, and validations: Big Data, Artificial intelligence (AI), High performance computing (HPC), Machine learning, Warehouse-scale computing, Mobile robotics, Edge and fog computing, Internet of Things (IoT), Blockchain, Data management and storage, Financial, Education, Medical, or other application domains.
  • Dataset Generation and Analysis: Research or industry data sets, including the methods used to collect the data and technical analyses supporting the quality of the measurements; Analyses or meta-analyses of existing data and original articles on systems, technologies and techniques that advance data sharing and reuse to support reproducible research; Evaluations of the rigor and quality of the experiments used to generate data and the completeness of the descriptions of the data; Tools generating large-scale data.
  • Workload characterization, quantitative measurement, design and evaluation studies: Characterization and evaluation of Computer and communication networks, protocols and algorithms; Wireless, mobile, ad-hoc and sensor networks, IoT applications; Computer architectures, hardware accelerators, multi-core processors, memory systems and storage networks; HPC systems; Operating systems, file systems and databases; Virtualization, data centers, distributed and cloud computing, fog and edge computing; Mobile and personal computing systems; Energy-efficient computing systems; Real-time and fault-tolerant systems; Security and privacy of computing and networked systems; Software systems and services, and enterprise applications; Social networks, multimedia systems, web services; Cyber-physical systems.
  • Methodologies, abstractions, metrics, algorithms and tools: Analytical modeling techniques and model validation; Workload characterization and benchmarking; Performance, scalability, power and reliability analysis; Sustainability analysis and power management; System measurement, performance monitoring and forecasting; Anomaly detection, problem diagnosis and troubleshooting; Capacity planning, resource allocation, run time management and scheduling; Experimental design, statistical analysis and simulation.
  • Measurement and evaluation: Evaluation methodologies and metrics; Testbed methodologies and systems; Instrumentation, sampling, tracing and profiling of large-scale, real-world applications and systems; Collection and analysis of measurement data that yield new insights; Measurement-based modeling (e.g., workloads, scaling behavior, assessment of performance bottlenecks); Methods and tools to monitor and visualize measurement and evaluation data; Systems and algorithms that build on measurement-based findings; Advances in data collection, analysis and storage (e.g., anonymization, querying, sharing); Reappraisal of previous empirical measurements and measurement-based conclusions; Descriptions of challenges and future directions that the measurement and evaluation community should pursue.

Paper Submission

Papers must be submitted in PDF. For a full paper, the page limit is 15 pages in the LNCS format, excluding references. For a short paper, the page limit is 8 pages in the LNCS format, excluding references. Submissions will be judged on the merit of the ideas rather than on length. The reviewing process is double-blind. Accepted papers will be published by Springer in the LNCS series (indexed by EI). Note that the LNCS format is also the format of the final published version. Distinguished papers will be recommended to, and published in, the BenchCouncil Transactions on Benchmarks, Standards and Evaluation (TBench).
At least one author must pre-register for the symposium, and at least one author must attend the symposium to present the paper. Papers for which no author is pre-registered will be removed from the proceedings.

Submission site: https://bench2022.hotcrp.com/
LNCS Latex template: https://www.benchcouncil.org/file/llncs2e.zip
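
For reference, the lines below sketch a minimal submission skeleton, assuming the standard llncs document class shipped with the template linked above; the file name in \bibliography and all titles are illustrative placeholders, not prescribed by the conference. Since reviewing is double-blind, author information should be anonymized in the submitted version.

    % Minimal Bench'22 submission skeleton (illustrative sketch; assumes the
    % llncs class from the LNCS template linked above).
    \documentclass{llncs}

    \begin{document}

    \title{Your Paper Title}
    % Reviewing is double-blind: keep author/institute fields anonymous in
    % the submission and restore them only in the camera-ready version.
    \author{Anonymous Author(s)}
    \institute{Anonymous Institute}
    \maketitle

    \begin{abstract}
    A short abstract.
    \end{abstract}

    \section{Introduction}
    Full papers: up to 15 pages; short papers: up to 8 pages (both limits
    exclude references).

    % splncs04 is the bibliography style distributed with the LNCS template;
    % "references" stands in for your own .bib file.
    \bibliographystyle{splncs04}
    \bibliography{references}

    \end{document}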

Awards

  • BenchCouncil Achievement Award ($3,000): This award recognizes a senior member who has made long-term contributions to benchmarking, measuring, and optimizing. The winner is eligible for BenchCouncil Fellow status.
  • BenchCouncil Rising Star Award ($1,000): This award recognizes a junior member who demonstrates outstanding potential for research and practice in benchmarking, measuring, and optimizing.
  • BenchCouncil Best Paper Award ($1,000): This award recognizes a paper presented at the Bench conference that demonstrates potential impact on research and practice in benchmarking, measuring, and optimizing.
  • BenchCouncil Distinguished Doctoral Dissertation Award ($2,000): This award recognizes and encourages superior research and writing by doctoral candidates in the broad field of benchmarks, data, standards, evaluation, and optimization. This year, the award has two tracks: the BenchCouncil Distinguished Doctoral Dissertation Award in Computer Architecture ($1,000) and the BenchCouncil Distinguished Doctoral Dissertation Award in Other Areas ($1,000).

Technical Program Committee

Murali Krishna Emani, ANL
Shin-ying Lee, AMD
Steve Farrell, NERSC
Krishnakumar Nair, Meta
Greg Diamos, Landing.AI
Fei Sun, Alibaba
Narayanan Sundaram, Facebook
Zhen Jia, Amazon
Shengen Yan, SenseTime
Gang Lu, Tencent
Rui Ren, Beijing Institute of Open Source Chip
Bin Hu, ICT, CAS
Khaled Ibrahim, Lawrence Berkeley National Laboratory
Sascha Hunold, TU Wien
Woongki Baek, UNIST
Mario Marino, Leeds Beckett University
Bin Ren, William & Mary
Gwangsun Kim, POSTECH
Vladimir Getov, University of Westminster
Guangli Li, ICT, CAS