Rigorous Assessment of Model Inference Accuracy using Language Cardinality
Models such as finite state automata are widely used to abstract the behavior of software systems by capturing the sequences of events observable during their executions. Nevertheless, models rarely exist in practice and, when they do, easily become outdated; moreover, manually building and maintaining models is costly and error-prone. To address these issues, a variety of model inference methods have been proposed that automatically construct models from execution traces.
However, systematically and reliably assessing the accuracy of inferred models remains an open problem. Even when a reference model is available, most existing accuracy assessment methods may return misleading and biased results, mainly because they rely on statistical estimators computed over a finite sample of randomly generated traces. Such estimators introduce avoidable uncertainty into the assessment and are sensitive to the parameters of the random trace generation process.
This paper addresses this problem by developing a systematic approach, based on analytic combinatorics, that minimizes bias and uncertainty in model accuracy assessment by replacing statistical estimation with deterministic accuracy measures computed from language cardinality. We experimentally demonstrate the consistency and applicability of our approach by assessing the accuracy of models inferred by state-of-the-art inference tools against reference models from established specification mining benchmarks.
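To make the cardinality-based idea concrete, the sketch below is a minimal, illustrative Python rendering, not the paper's actual technique or tooling. It assumes complete deterministic finite automata over a shared alphabet and computes a length-bounded, deterministic precision as a ratio of language cardinalities, counting accepted words exactly by dynamic programming (an elementary counterpart of analytic-combinatorics counting). All names (DFA, count_words, intersection, precision_up_to) are hypothetical.

```python
from itertools import product

class DFA:
    """A complete deterministic finite automaton over a shared alphabet."""
    def __init__(self, states, alphabet, delta, start, accepting):
        self.states = states        # set of state ids
        self.alphabet = alphabet    # set of symbols
        self.delta = delta          # dict: (state, symbol) -> state (total)
        self.start = start
        self.accepting = set(accepting)

def count_words(dfa, max_len):
    """Return counts[k] = number of words of length k accepted by dfa,
    for k = 0..max_len, via dynamic programming over path counts."""
    counts = []
    paths = {q: 0 for q in dfa.states}   # words of current length ending in q
    paths[dfa.start] = 1                 # the empty word
    for _ in range(max_len + 1):
        counts.append(sum(paths[q] for q in dfa.accepting))
        nxt = {q: 0 for q in dfa.states}
        for q, c in paths.items():
            for a in dfa.alphabet:
                nxt[dfa.delta[(q, a)]] += c
        paths = nxt
    return counts

def intersection(d1, d2):
    """Product construction: the returned DFA accepts L(d1) ∩ L(d2)."""
    states = set(product(d1.states, d2.states))
    delta = {((q1, q2), a): (d1.delta[(q1, a)], d2.delta[(q2, a)])
             for (q1, q2) in states for a in d1.alphabet}
    accepting = {(q1, q2) for (q1, q2) in states
                 if q1 in d1.accepting and q2 in d2.accepting}
    return DFA(states, d1.alphabet, delta, (d1.start, d2.start), accepting)

def precision_up_to(inferred, reference, n):
    """Deterministic precision over words of length <= n:
    |L(inferred) ∩ L(reference)| / |L(inferred)| (1.0 if the denominator is 0)."""
    both = sum(count_words(intersection(inferred, reference), n))
    total = sum(count_words(inferred, n))
    return both / total if total else 1.0

# Hypothetical example: the reference accepts words with an even number of
# 'a's; the (overly general) inferred model accepts every word over {a, b}.
reference = DFA({0, 1}, {'a', 'b'},
                {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1},
                start=0, accepting={0})
inferred = DFA({0}, {'a', 'b'},
               {(0, 'a'): 0, (0, 'b'): 0},
               start=0, accepting={0})
print(precision_up_to(inferred, reference, 8))  # ~0.50: half the words are overgenerated
```

A recall-like measure follows by swapping the roles of the two models, i.e., |L(inferred) ∩ L(reference)| / |L(reference)|. Because the counts are exact, repeated runs yield identical values: there is no sampling variance and no dependence on trace-generation parameters.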
Wed 17 Jul (times shown in the Brasilia, Distrito Federal, Brazil time zone)
11:00 - 12:30 | Formal Verification (Demonstrations / Journal First / Research Papers / Industry Papers) at Pitanga. Chair(s): Yunja Choi (Kyungpook National University)
11:00 | 18m Talk | A Transferability Study of Interpolation-Based Hardware Model Checking to Software Verification (Research Papers). DOI, Media Attached.
11:18 | 9m Talk | CoqPyt: Proof Navigation in Python in the Era of LLMs (Demonstrations). Pedro Carrott (Imperial College London), Nuno Saavedra (INESC-ID and IST, University of Lisbon), Kyle Thompson (University of California, San Diego), Sorin Lerner (University of California at San Diego), João F. Ferreira (INESC-ID and IST, University of Lisbon), Emily First (University of California, San Diego). DOI, Pre-print.
11:27 | 9m Talk | How We Built Cedar: A Verification-Guided Approach (Industry Papers). Craig Disselkoen (Amazon Web Services), Aaron Eline (Amazon), Shaobo He (Amazon Web Services), Kyle Headley (Unaffiliated), Michael Hicks (Amazon), Kesha Hietala (Amazon Web Services), John Kastner (Amazon Web Services), Anwar Mamat (University of Maryland), Matt McCutchen, Neha Rungta (Amazon Web Services), Bhakti Shah (University of St. Andrews), Emina Torlak (Amazon Web Services, USA), Andrew Wells (Amazon Web Services).
11:36 | 18m Talk | Mission Specification Patterns for Mobile Robots: Providing Support for Quantitative Properties (Journal First). Claudio Menghi (University of Bergamo; McMaster University), Christos Tsigkanos (University of Bern, Switzerland), Mehrnoosh Askarpour (McMaster University), Patrizio Pelliccione (Gran Sasso Science Institute, L'Aquila, Italy), Gricel Vázquez (University of York, UK), Radu Calinescu (University of York, UK), Sergio García (Volvo Cars Corporation, Sweden).
11:54 | 18m Talk | Rigorous Assessment of Model Inference Accuracy using Language Cardinality (Journal First). Donato Clun (Imperial College London), Donghwan Shin (University of Sheffield), Antonio Filieri (AWS and Imperial College London), Domenico Bianculli (University of Luxembourg).
12:12 | 18m Talk | Simulation-based Testing of Simulink Models with Test Sequence and Test Assessment Blocks (Journal First). Federico Formica (McMaster University), Tony Fan (McMaster University), Akshay Rajhans (MathWorks), Vera Pantelic (McMaster University), Mark Lawford (McMaster University), Claudio Menghi (University of Bergamo; McMaster University).