Automated Unit Test Improvement using Large Language Models at Meta
This paper describes Meta’s TestGen-LLM tool, which uses LLMs to automatically improve existing human-written tests. TestGen-LLM verifies that its generated test classes successfully clear a set of filters that assure measurable improvement over the original test suite, thereby eliminating problems due to LLM hallucination. We describe the deployment of TestGen-LLM at Meta test-a-thons for the Instagram and Facebook platforms. In an evaluation on Reels and Stories products for Instagram, 75% of TestGen-LLM’s test cases built correctly, 57% passed reliably, and 25% increased coverage. During Meta’s Instagram and Facebook test-a-thons, it improved 11.5% of all classes to which it was applied, with 73% of its recommendations being accepted for production deployment by Meta software engineers. We believe this is the first report on industrial scale deployment of LLM-generated code backed by such assurances of code improvement.
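The abstract describes a chain of assurance filters that every LLM-generated candidate test must clear before it is recommended: it must build, pass reliably under repeated execution, and measurably increase coverage over the original suite. The Python sketch below illustrates that filtration idea only; the function names, signatures, and the choice of five repeated runs are illustrative assumptions, not Meta's internal TestGen-LLM implementation.

from typing import Callable, Iterable, List

def filter_candidate_tests(
    candidates: Iterable[str],
    builds: Callable[[str], bool],          # does the extended test class build?
    passes: Callable[[str], bool],          # does the extended test class pass on one run?
    coverage_gain: Callable[[str], float],  # coverage delta over the original test class
    runs: int = 5,                          # repeated runs to screen out flaky tests (assumed value)
) -> List[str]:
    """Keep only candidate tests that clear every assurance filter."""
    accepted = []
    for test in candidates:
        if not builds(test):
            continue  # Filter 1: must build correctly
        if not all(passes(test) for _ in range(runs)):
            continue  # Filter 2: must pass reliably across repeated runs
        if coverage_gain(test) <= 0.0:
            continue  # Filter 3: must add coverage beyond the original suite
        accepted.append(test)
    return accepted

# Example with trivial stand-in predicates; real filters would invoke the build
# system, the test runner, and a coverage tool.
if __name__ == "__main__":
    survivors = filter_candidate_tests(
        candidates=["testFeedScrolls", "testReelAutoplay"],
        builds=lambda t: True,
        passes=lambda t: True,
        coverage_gain=lambda t: 0.02 if t == "testReelAutoplay" else 0.0,
    )
    print(survivors)  # ['testReelAutoplay']

Ordering the filters from cheapest (build) to most expensive (coverage measurement) keeps such a pipeline economical, and the stage-by-stage survival rates reported in the abstract (75% built correctly, 57% passed reliably, 25% increased coverage) correspond to these successive filters.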
Wed 17 Jul | Displayed time zone: Brasilia, Distrito Federal, Brazil
16:00 - 18:00 | AI4SE 2 | Industry Papers / Research Papers | Room: Pitomba | Chair(s): Jingyue Li, Norwegian University of Science and Technology (NTNU)
16:00 | 18m Talk | MonitorAssistant: Simplifying Cloud Service Monitoring via Large Language Models | Industry Papers | Zhaoyang Yu (Tsinghua University), Minghua Ma (Microsoft Research), Chaoyun Zhang (Microsoft), Si Qin (Microsoft Research), Yu Kang (Microsoft Research), Chetan Bansal (Microsoft Research), Saravan Rajmohan (Microsoft), Yingnong Dang (Microsoft Azure), Changhua Pei (Computer Network Information Center, Chinese Academy of Sciences), Dan Pei (Tsinghua University), Qingwei Lin (Microsoft), Dongmei Zhang (Microsoft Research)
16:18 | 18m Talk | Code-Aware Prompting: A Study of Coverage-Guided Test Generation in Regression Setting using LLM | Research Papers | Gabriel Ryan (Columbia University), Siddhartha Jain (AWS AI Labs), Mingyue Shang (AWS AI Labs), Shiqi Wang (AWS AI Labs), Xiaofei Ma (AWS AI Labs), Murali Krishna Ramanathan (AWS AI Labs), Baishakhi Ray (Columbia University, New York; AWS AI Labs)
16:36 | 18m Talk | A Machine Learning-Based Error Mitigation Approach for Reliable Software Development on IBM’s Quantum Computers | Industry Papers | Asmar Muqeet (Simula Research Laboratory and University of Oslo), Shaukat Ali (Simula Research Laboratory and Oslo Metropolitan University), Tao Yue (Beihang University), Paolo Arcaini (National Institute of Informatics)
16:54 | 18m Talk | Multi-line AI-assisted Code Authoring | Industry Papers | Omer Dunay (Meta Platforms, Inc.), Daniel Cheng (Meta Platforms, Inc.), Adam Tait (Meta Platforms, Inc.), Parth Thakkar (Meta Platforms, Inc.), Peter C Rigby (Meta / Concordia University), Andy Chiu (Meta Platforms, Inc.), Imad Ahmad (Meta Platforms, Inc.), Arun Ganesan (Meta Platforms, Inc.), Chandra Sekhar Maddila (Meta Platforms, Inc.), Vijayaraghavan Murali (Meta Platforms, Inc.), Ali Tayyebi (Meta Platforms, Inc.), Nachiappan Nagappan (Meta Platforms, Inc.)
17:12 | 18m Talk | Combating Missed Recalls in E-commerce Search: a CoT-prompting Testing Approach | Industry Papers | Shengnan Wu (School of Computer Science, Fudan University), Yongxiang Hu (Fudan University), Yingchuan Wang (School of Computer Science, Fudan University), Jiazhen Gu (The Chinese University of Hong Kong), Jin Meng (Meituan Inc.), Liujie Fan (Meituan Inc.), Zhongshi Luan (Meituan Inc.), Xin Wang (Fudan University), Yangfan Zhou (Fudan University) | Pre-print available
17:30 | 18m Talk | Automated Unit Test Improvement using Large Language Models at Meta | Industry Papers | Mark Harman (Meta Platforms, Inc. and UCL), Jubin Chheda (Meta Platforms), Anastasia Finogenova (Meta Platforms), Inna Harper (Meta), Alexandru Marginean (Meta Platforms), Shubho Sengupta (Meta Platforms), Eddy Wang (Meta Platforms), Nadia Alshahwan (Meta Platforms), Beliz Gokkaya (Meta Platforms)