As software projects progress, code quality becomes paramount: it affects the reliability, maintainability, and security of the software. For this reason, static analysis tools are used in developer workflows to flag code quality issues. However, developers must then spend extra effort revising their code to resolve the tool findings. In this work, we investigate the use of (instruction-following) large language models (LLMs) to assist developers in revising code to resolve code quality issues.
We present a tool, CORE (short for COde REvisions), architected as a pair of LLMs: a proposer and a ranker. Providers of static analysis tools recommend ways to mitigate the tool warnings, and developers follow them to revise their code. The \emph{proposer LLM} of CORE takes the same set of recommendations and applies them to generate candidate code revisions. The candidates that pass the static quality checks are retained. However, the LLM may introduce subtle, unintended functionality changes that can go undetected by the static analysis. The \emph{ranker LLM} evaluates the changes made by the proposer using a rubric that closely follows the acceptance criteria that a developer would enforce. CORE uses the scores assigned by the ranker LLM to rank the candidate revisions before presenting them to the developer.
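To make the proposer-ranker flow concrete, the following is a minimal Python sketch of such a pipeline under stated assumptions; the function names (call_proposer_llm, call_ranker_llm, passes_static_check) and the prompt structure are hypothetical stand-ins, not CORE's actual interfaces.

```python
# Illustrative sketch of a proposer-ranker revision pipeline (not CORE's real API).
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Candidate:
    code: str           # proposed revision of the flagged file
    score: float = 0.0  # rubric score assigned by the ranker LLM


def revise(source: str,
           warning: str,
           recommendation: str,
           call_proposer_llm: Callable[[str], List[str]],
           call_ranker_llm: Callable[[str, str], float],
           passes_static_check: Callable[[str], bool]) -> List[Candidate]:
    """Generate, filter, and rank candidate revisions for one tool warning."""
    # 1. Proposer: prompt an LLM with the code, the warning, and the
    #    tool provider's recommendation to obtain candidate revisions.
    prompt = (f"Code:\n{source}\n\nWarning: {warning}\n"
              f"Recommendation: {recommendation}\nRevise the code.")
    candidates = [Candidate(code=c) for c in call_proposer_llm(prompt)]

    # 2. Retain only candidates that now pass the static quality check.
    candidates = [c for c in candidates if passes_static_check(c.code)]

    # 3. Ranker: score each surviving candidate against a rubric that mirrors
    #    a reviewer's acceptance criteria (e.g., no unintended functionality
    #    change), then present the candidates best-first.
    for c in candidates:
        c.score = call_ranker_llm(source, c.code)
    return sorted(candidates, key=lambda c: c.score, reverse=True)
```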
We conduct a variety of experiments on two public benchmarks to show the ability of CORE: 1) to generate code revisions acceptable to both static analysis tools and human reviewers (the latter evaluated via a human study on a subset of the Python benchmark), 2) to reduce human review effort by detecting and eliminating revisions with unintended changes, 3) to work readily across multiple languages (Python and Java), static analysis tools (CodeQL and SonarQube), and quality checks (52 and 10 checks, respectively), and 4) to achieve a fix rate comparable to a rule-based automated program repair tool, but with much smaller engineering effort (on the Java benchmark).
CORE could revise 59.2% of Python files (across 52 quality checks) so that they pass scrutiny by both a tool and a human reviewer. The ranker LLM reduces false positives by 25.8% in these cases. On the Java benchmark, CORE produced revisions that passed the static analysis tool in 76.8% of files (across 10 quality checks), comparable to the 78.3% achieved by a specialized program repair tool, with significantly less engineering effort.