Wed 17 Jul 2024 14:36 - 14:54 at Pitanga - Testing 1 Chair(s): Xi Zheng

Random testing approaches work by generating inputs at random, or by selecting inputs randomly from some pre-defined operational profile. One long-standing question in this and other testing contexts is: when can we stop testing? At what point can we be certain that executing further tests in this manner will not explore previously untested (and potentially buggy) software behaviors? This is analogous to the question in Machine Learning of how many training examples are required to infer an accurate model. In this paper we show how probabilistic approaches to answering this question in Machine Learning (arising from Computational Learning Theory) can be applied in our testing context, providing an upper bound on the number of tests required to achieve a given level of adequacy. We validate this bound on a large set of Java units and an automated driving system.
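For orientation only, here is a minimal sketch of how such a bound can be computed, assuming the textbook PAC sample-complexity result for a finite hypothesis class, m >= (1/epsilon) * (ln|H| + ln(1/delta)); the paper derives its own bound, which may differ from this illustration. The parameter names below are hypothetical, not taken from the paper.

    import math

    def pac_test_bound(epsilon: float, delta: float, num_behaviours: int) -> int:
        """Textbook PAC bound for a finite hypothesis class of size |H|:
        m >= (1/epsilon) * (ln|H| + ln(1/delta)).
        Read in a testing context: after m random tests, with probability
        at least 1 - delta, any candidate behaviour still consistent with
        every observed execution disagrees with the true behaviour on at
        most an epsilon fraction of inputs.
        """
        return math.ceil((math.log(num_behaviours) + math.log(1.0 / delta)) / epsilon)

    # Example: tolerate 1% unexplored behaviour with 95% confidence,
    # over a space of 2**20 candidate behaviours -> roughly 1700 tests.
    print(pac_test_bound(epsilon=0.01, delta=0.05, num_behaviours=2**20))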

Wed 17 Jul

Displayed time zone: Brasilia, Distrito Federal, Brazil

14:00 - 15:30
Testing 1 (Research Papers / Journal First) at Pitanga
Chair(s): Xi Zheng (Macquarie University)
14:00
18m
Talk
Test Input Prioritization for 3D Point Clouds
Journal First
Yinghua LI (University of Luxembourg), Xueqi Dang (University of Luxembourg), Lei Ma (The University of Tokyo & University of Alberta), Jacques Klein (University of Luxembourg), Yves Le Traon (University of Luxembourg), Tegawendé F. Bissyandé (University of Luxembourg)
14:18
18m
Talk
Evaluating and Improving ChatGPT for Unit Test Generation
Research Papers
Zhiqiang Yuan (Fudan University), Mingwei Liu (Fudan University), Shiji Ding (Fudan University), Kaixin Wang (Fudan University), Yixuan Chen (Yale University), Xin Peng (Fudan University), Yiling Lou (Fudan University)
14:36
18m
Talk
Bounding Random Test Set Size with Computational Learning Theory
Research Papers
Neil Walkinshaw (University of Sheffield), Michael Foster (University of Sheffield), José Miguel Rojas (University of Sheffield), Robert Hierons (University of Sheffield)
Pre-print
14:54
18m
Talk
COSTELLO: Contrastive Testing for Embedding-based Large Language Model as a Service Embeddings
Research Papers
Weipeng Jiang (Xi'an Jiaotong University), Juan Zhai (University of Massachusetts, Amherst), Shiqing Ma (University of Massachusetts, Amherst), Xiaoyu Zhang (Xi'an Jiaotong University), Chao Shen (Xi'an Jiaotong University)
15:12
18m
Talk
FeatMaker: Automated Feature Engineering for Search Strategy of Symbolic Execution
Research Papers
Jaehan Yoon (Sungkyunkwan University), Sooyoung Cha (Sungkyunkwan University)