Wed 17 Jul 2024 17:12 - 17:30 at Mandacaru - Program Repair and Synthesis Chair(s): Fernanda Madeiral

Large language models show great promise in many domains, including programming. A promise is easy to make but hard to keep, and language models often fail to keep their promises, generating erroneous code. A promising avenue to keep models honest is to incorporate formal verification: generating programs’ specifications as well as code so that the code can be proved correct with respect to the specifications. Unfortunately, existing large language models show a severe lack of proficiency in verified programming.
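As a concrete, purely illustrative sketch of what "specifications as well as code" means in Dafny (this example is ours, not one of the paper's benchmark problems), a method can carry a contract that the verifier discharges automatically:

    // Illustrative sketch only, not taken from the paper or the MBPP dataset.
    // The postconditions (ensures clauses) are the specification; Dafny proves
    // that the method body satisfies them.
    method Abs(x: int) returns (y: int)
      ensures y >= 0
      ensures y == x || y == -x
    {
      if x < 0 {
        y := -x;
      } else {
        y := x;
      }
    }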

In this paper, we demonstrate how to improve two pretrained models’ proficiency in the Dafny verification-aware language. Using 178 problems from the MBPP dataset, we prompt two contemporary models (GPT-4 and PaLM-2) to synthesize Dafny methods. We use three different types of prompts: a direct Contextless prompt; a Signature prompt that includes a method signature and test cases; and a Chain of Thought (CoT) prompt that decomposes the problem into steps and includes example problems and solutions generated via retrieval augmentation. Our results show that GPT-4 performs better than PaLM-2 on these tasks, and that both models perform best with the retrieval-augmented CoT prompt. GPT-4 was able to generate verified, human-evaluated Dafny methods for 58% of the problems; however, it managed only 19% of the problems with the Contextless prompt, and even fewer (10%) with the Signature prompt. We are thus able to contribute 153 verified Dafny solutions to MBPP problems: 50 that we wrote manually and 103 synthesized by GPT-4.
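To make the prompt styles concrete, a Signature-style prompt for an MBPP-like task might supply a Dafny method signature plus test cases and ask the model to produce a body and specification that verify. The task, method name, and wording below are our illustrative assumptions, not the paper's actual prompt text:

    // Hypothetical Signature-style prompt content (illustrative assumption):
    //   "Complete this Dafny method so that it verifies and satisfies the tests."
    //   method MinOfTwo(a: int, b: int) returns (m: int)
    //   tests: MinOfTwo(2, 5) == 2;  MinOfTwo(7, 3) == 3
    //
    // A verified completion of the kind the model is expected to return:
    method MinOfTwo(a: int, b: int) returns (m: int)
      ensures m <= a && m <= b
      ensures m == a || m == b
    {
      if a <= b { m := a; } else { m := b; }
    }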

Our results demonstrate that the benefits of formal program verification are now within reach of code-generating large language models. Likewise, program verification systems can benefit from large language models, whether to synthesize code wholesale, to generate specifications, or to act as a “programmer’s verification apprentice” that constructs annotations, such as loop invariants, which are hard for programmers to write and for verification tools to find. Finally, we expect that the approach we have pioneered here — generating candidate solutions that are subsequently formally checked for correctness — should transfer to other domains (e.g., legal arguments, transport signaling, structural engineering) where solutions must be correct, and where that correctness must be demonstrated, explained, and understood by designers and end-users.
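As a small hand-written illustration of the kind of annotation meant here (again ours, not an artifact of the paper), a loop invariant is what lets Dafny prove a loop-based method against its specification:

    // Illustrative sketch only: the invariant clauses are the annotations a
    // "verification apprentice" could supply so the postconditions can be proved.
    method MaxElement(a: seq<int>) returns (m: int)
      requires |a| > 0
      ensures m in a
      ensures forall i :: 0 <= i < |a| ==> a[i] <= m
    {
      m := a[0];
      var k := 1;
      while k < |a|
        invariant 1 <= k <= |a|
        invariant m in a
        invariant forall i :: 0 <= i < k ==> a[i] <= m
      {
        if a[k] > m { m := a[k]; }
        k := k + 1;
      }
    }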

Wed 17 Jul

Displayed time zone: Brasilia, Distrito Federal, Brazil

16:00 - 18:00
Program Repair and Synthesis (Demonstrations / Research Papers / Ideas, Visions and Reflections) at Mandacaru
Chair(s): Fernanda Madeiral Vrije Universiteit Amsterdam
16:00
18m
Talk
A Deep Dive into Large Language Models for Automated Bug Localization and Repair
Research Papers
Soneya Binta Hossain University of Virginia, Nan Jiang Purdue University, Qiang Zhou Amazon Web Services, Xiaopeng LI Amazon Web Services, Wen-Hao Chiang Amazon Web Services, Yingjun Lyu Amazon Web Services, Hoan Nguyen Amazon Web Services, Omer Tripp Amazon Web Services
DOI
16:18
18m
Talk
CORE: Resolving Code Quality Issues Using LLMs
Research Papers
Nalin Wadhwa Microsoft Research, India, Jui Pradhan Microsoft Research, India, Atharv Sonwane Microsoft Research, India, Surya Prakash Sahu Microsoft Research, India, Nagarajan Natarajan Microsoft Research, India, Aditya Kanade Microsoft Research, India, Suresh Parthasarathy Microsoft Research, India, Sriram Rajamani Microsoft Research, India
16:36
18m
Talk
Towards Effective Multi-Hunk Bug Repair: Detecting, Creating, Evaluating, and Understanding Indivisible Bugs
Research Papers
Qi Xin Wuhan University, Haojun Wu Wuhan University, Jinran Tang Wuhan University, Xinyu Liu Wuhan University, Steven P. Reiss Brown University, Jifeng Xuan Wuhan University
16:54
18m
Talk
ProveNFix: Temporal Property guided Program Repair (Distinguished Paper Award)
Research Papers
Yahui Song National University of Singapore, Xiang Gao Beihang University, Wenhua Li National University of Singapore, Wei-Ngan Chin National University of Singapore, Abhik Roychoudhury National University of Singapore
DOI Pre-print
17:12
18m
Talk
Towards AI-Assisted Synthesis of Verified Dafny Methods
Research Papers
Md Rakib Hossain Misu University of California Irvine, Crista Lopes University of California Irvine, Iris Ma University of California Irvine, James Noble Independent, Wellington, NZ
DOI Pre-print
17:30
9m
Talk
Execution-free program repair
Ideas, Visions and Reflections
Bertrand Meyer Constructor Institute Schaffhausen, Li Huang Constructor Institute Schaffhausen, Ilgiz Mustafin Constructor Institute, Manuel Oriol Constructor Institute Schaffhausen
17:39
9m
Talk
ConDefects: A Complementary Dataset to Address the Data Leakage Concern for LLM-based Fault Localization and Program Repair
Demonstrations
Yonghao Wu Beijing University of Chemical Technology, Zheng Li Beijing University of Chemical Technology, Jie M. Zhang King's College London, Yong Liu Beijing University of Chemical Technology
17:48
9m
Talk
ASAC: A Benchmark for Algorithm Synthesis
Demonstrations
Zhao Zhang Peking University, Yican Sun Peking University, Ruyi Ji Peking University, Siyuan Li Peking University, Xuanyu Peng University of California, San Diego, Zhechong Huang Peking University, Sizhe Li Peking University, Tianran Zhu Peking University, Yingfei Xiong Peking University
Pre-print Media Attached