Wed 17 Jul 2024, 14:18 - 14:36, at Pitanga - Testing 1. Chair(s): Xi Zheng

Unit testing plays an essential role in detecting bugs in functionally discrete program units (e.g., methods). Manually writing high-quality unit tests is time-consuming and laborious. Although traditional techniques can generate tests with reasonable coverage, the resulting tests exhibit low readability and thus cannot be directly adopted by developers in practice. Recent work has shown the large potential of large language models (LLMs) in unit test generation: pre-trained on massive corpora of developer-written code, these models can generate more human-like and meaningful test code. ChatGPT, a recent LLM that further incorporates instruction tuning and reinforcement learning, has exhibited outstanding performance in various domains. To date, however, it remains unclear how effective ChatGPT is at unit test generation.

In this work, we perform the first empirical study to evaluate ChatGPT's capability for unit test generation. In particular, we conduct both a quantitative analysis and a user study to systematically investigate the quality of its generated tests in terms of correctness, sufficiency, readability, and usability. We find that the tests generated by ChatGPT still suffer from correctness issues, including diverse compilation errors and execution failures (mostly caused by incorrect assertions); however, the passing tests closely resemble manually written tests, achieving comparable coverage and readability, and are sometimes even preferred by developers. Our findings indicate that unit test generation with ChatGPT is very promising, provided the correctness of the generated tests can be further improved.

Inspired by these findings, we further propose ChatTester, a novel ChatGPT-based unit test generation approach that leverages ChatGPT itself to improve the quality of its generated tests. ChatTester incorporates an initial test generator and an iterative test refiner. Our evaluation demonstrates the effectiveness of ChatTester: it generates 34.3% more compilable tests and 18.7% more tests with correct assertions than the default ChatGPT.
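
To make the generate-then-refine loop concrete, the sketch below shows one way such an iterative test refiner could be wired up. It is a minimal illustration only: the helper names (ask_llm, check_test), the TestResult type, and the prompt wording are assumptions made for this sketch, not ChatTester's actual interface or prompts.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class TestResult:
    compiled: bool           # did the candidate test compile?
    passed: bool             # did it pass when executed?
    error_message: str = ""  # compiler/runtime diagnostics, if any

def refine_test(
    ask_llm: Callable[[str], str],            # sends a prompt to the LLM, returns code
    check_test: Callable[[str], TestResult],  # compiles and runs a candidate test
    focal_method: str,                        # source code of the method under test
    class_context: str,                       # class name, fields, constructor signatures
    max_rounds: int = 3,
) -> Optional[str]:
    # Initial generation: prompt with the focal method plus its class context.
    candidate = ask_llm(
        "Write a JUnit test for the following method.\n"
        f"Class context:\n{class_context}\n"
        f"Focal method:\n{focal_method}\n"
    )
    # Iterative refinement: validate the candidate, then feed the resulting
    # compilation or execution errors back to the model as a repair prompt.
    for _ in range(max_rounds):
        result = check_test(candidate)
        if result.compiled and result.passed:
            return candidate
        candidate = ask_llm(
            "The following test does not compile or fails. Fix it.\n"
            f"Error:\n{result.error_message}\n"
            f"Test:\n{candidate}\n"
        )
    return None  # no correct test within the round budget

The essential idea, as described in the abstract, is that the model's own output is validated (compiled and executed) and the diagnostics are fed back as a repair prompt, directly targeting the compilation errors and incorrect assertions identified in the study.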

Wed 17 Jul

Displayed time zone: Brasilia, Distrito Federal, Brazil

14:00 - 15:30
Testing 1 (Research Papers / Journal First) at Pitanga
Chair(s): Xi Zheng (Macquarie University)
14:00
18m
Talk
Test Input Prioritization for 3D Point Clouds
Journal First
Yinghua LI (University of Luxembourg), Xueqi Dang (University of Luxembourg), Lei Ma (The University of Tokyo & University of Alberta), Jacques Klein (University of Luxembourg), Yves Le Traon (University of Luxembourg, Luxembourg), Tegawendé F. Bissyandé (University of Luxembourg)
14:18
18m
Talk
Evaluating and Improving ChatGPT for Unit Test Generation
Research Papers
Zhiqiang Yuan (Fudan University), Mingwei Liu (Fudan University), Shiji Ding (Fudan University), Kaixin Wang (Fudan University), Yixuan Chen (Yale University), Xin Peng (Fudan University), Yiling Lou (Fudan University)
14:36
18m
Talk
Bounding Random Test Set Size with Computational Learning Theory
Research Papers
Neil Walkinshaw (University of Sheffield), Michael Foster (The University of Sheffield), José Miguel Rojas (The University of Sheffield), Robert Hierons (The University of Sheffield)
Pre-print
14:54
18m
Talk
COSTELLO: Contrastive Testing for Embedding-based Large Language Model as a Service Embeddings
Research Papers
Weipeng Jiang (Xi'an Jiaotong University), Juan Zhai (University of Massachusetts, Amherst), Shiqing Ma (University of Massachusetts, Amherst), Xiaoyu Zhang (Xi'an Jiaotong University), Chao Shen (Xi'an Jiaotong University)
15:12
18m
Talk
FeatMaker: Automated Feature Engineering for Search Strategy of Symbolic Execution
Research Papers
Jaehan Yoon (Sungkyunkwan University), Sooyoung Cha (Sungkyunkwan University)