Thu 18 Jul 2024 12:12 - 12:30 at Pitanga - Program Analysis and Performance 2 Chair(s): Rahul Purandare

Learning and predicting the performance of given software configurations are of high importance to many software engineering activities. While configurable software systems will almost certainly face diverse running environments (e.g., version, hardware, and workload), current work often either builds performance models under a single environment or fails to properly handle data from diverse settings, hence restricting their accuracy for new environments. In this paper, we target configuration performance learning under multiple environments. We do so by designing SeMPL, a meta-learning framework that learns a common understanding from configurations measured in distinct (meta) environments and generalizes it to the unforeseen target environment. What makes it unique is that, unlike common meta-learning frameworks (e.g., MAML and MetaSGD) that train on the meta environments in parallel, we train on them sequentially, one at a time. The order of training naturally allows discriminating the contributions of the meta environments in the resulting meta-model, which better fits the nature of configuration data, whose distribution is known to differ dramatically between environments. Comparing against 15 state-of-the-art models under nine systems, our extensive experimental results demonstrate that SeMPL performs considerably better on 89% of the systems, with up to 99% accuracy improvement, while being data-efficient, leading to a maximum 3.86× speedup. All code and data can be found at our anonymous repository: https://github.com/anoanonymous/SeMPL.
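The abstract's central mechanism, training on the meta environments sequentially so that the training order encodes each environment's contribution, can be sketched as follows. This is a minimal illustrative sketch assuming a warm-started linear regressor trained with gradient descent; it is not the authors' SeMPL implementation, and the function names, the ordering argument, and the toy data are all hypothetical.

```python
# Illustrative sketch of sequential meta-learning for configuration
# performance prediction (NOT the authors' SeMPL implementation; the
# model, loss, and warm-start scheme here are assumptions).
import numpy as np

def sgd_fit(w, X, y, lr=0.01, epochs=200):
    """Warm-started linear regression via batch gradient descent."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # add bias column
    for _ in range(epochs):
        grad = 2 * Xb.T @ (Xb @ w - y) / len(y)
        w -= lr * grad
    return w

def sequential_meta_train(meta_envs, n_features, order=None):
    """Train one model over the meta environments sequentially, one at
    a time; the chosen order determines how strongly each environment
    shapes the final weights. `meta_envs` is a list of (X, y) pairs."""
    w = np.zeros(n_features + 1)
    order = order if order is not None else range(len(meta_envs))
    for i in order:                       # sequential, not parallel
        X, y = meta_envs[i]
        w = sgd_fit(w, X, y)              # warm start from previous env
    return w

# Toy data: three meta environments sharing a trend but with
# environment-specific shifts (mimicking hardware/workload changes).
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])
meta_envs = []
for shift in (0.0, 5.0, -3.0):
    X = rng.random((50, 3))
    y = X @ true_w + shift + rng.normal(0, 0.1, 50)
    meta_envs.append((X, y))

w_meta = sequential_meta_train(meta_envs, n_features=3)
# Few-shot adaptation to the unseen target environment.
X_t = rng.random((5, 3)); y_t = X_t @ true_w + 1.0
w_target = sgd_fit(w_meta.copy(), X_t, y_t, epochs=400)
```

Because each environment's fit starts from the weights left by the previous one, later environments dominate the meta-model, which is the intuition behind ordering meta environments rather than averaging their gradients in parallel as MAML-style methods do.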

Thu 18 Jul

Displayed time zone: Brasilia, Distrito Federal, Brazil

11:00 - 12:30
Program Analysis and Performance 2 (Research Papers) at Pitanga
Chair(s): Rahul Purandare University of Nebraska-Lincoln
11:00 (18m, Talk, Research Papers)
Adapting Multi-objectivized Software Configuration Tuning
Tao Chen (University of Birmingham), Miqing Li (University of Birmingham)
Pre-print

11:18 (18m, Talk, Research Papers)
Can Large Language Models Transform Natural Language Intent into Formal Method Postconditions?
Madeline Endres (University of Massachusetts Amherst), Sarah Fakhoury (Microsoft Research), Saikat Chakraborty (Microsoft Research), Shuvendu K. Lahiri (Microsoft Research)

11:36 (18m, Talk, Research Papers)
Analyzing Quantum Programs with LintQ: A Static Analysis Framework for Qiskit
Matteo Paltenghi (University of Stuttgart), Michael Pradel (University of Stuttgart)
Pre-print

11:54 (18m, Talk, Research Papers)
Abstraction-Aware Inference of Metamorphic Relations
Agustin Nolasco (University of Rio Cuarto), Facundo Molina (IMDEA Software Institute), Renzo Degiovanni (Luxembourg Institute of Science and Technology), Alessandra Gorla (IMDEA Software Institute), Diego Garbervetsky (Departamento de Computación, FCEyN, UBA), Mike Papadakis (University of Luxembourg), Sebastian Uchitel (Imperial College and University of Buenos Aires), Nazareno Aguirre (University of Rio Cuarto and CONICET), Marcelo F. Frias (Dept. of Software Engineering, Instituto Tecnológico de Buenos Aires)

12:12 (18m, Talk, Research Papers)
Predicting Configuration Performance in Multiple Environments with Sequential Meta-Learning
Jingzhi Gong (Loughborough University), Tao Chen (University of Birmingham)
Pre-print