Wed 17 Jul 2024 12:12 - 12:30 at Pitomba - Code Search and Completion Chair(s): Akond Rahman

Code completion aims to enhance programming productivity by predicting potential code based on the current programming context. Recently, pretrained language models (LMs) have become prominent in this field. Various approaches have been proposed to fine-tune LMs for code completion using supervised fine-tuning (SFT) techniques. However, the inherent exposure bias of these models can cause errors to accumulate early in the sequence completion, leading to even more errors in subsequent completions. To address this problem, deep reinforcement learning (DRL) offers an alternative technique for fine-tuning LMs for code completion, which can improve generalization capabilities and overall performance. Nevertheless, integrating DRL-based strategies into code completion faces two major challenges: 1) The dynamic nature of the code context requires the completion model to quickly adapt to changes, which poses difficulties for conventional DRL strategies that rely on delayed rewards of the final code state. 2) It is difficult to evaluate the correctness of partial code, so reward redistribution-based strategies cannot be adapted to code completion. To tackle these challenges, we propose IRCoCo, a code completion-specific DRL-based fine-tuning framework. This framework is designed to provide immediate rewards as feedback for detecting dynamic context changes arising from continuous edits during code completion. With the aid of immediate feedback, the fine-tuned LM can gain a more precise understanding of the current context, thereby enabling effective adjustment of the LM and optimizing code completion in a more refined manner. Experimental results demonstrate that fine-tuning pretrained LMs with IRCoCo leads to significant improvements in the code completion task, outperforming both SFT-based and other DRL-based baselines.
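The contrast the abstract draws between delayed and immediate rewards can be illustrated with a minimal sketch of REINFORCE-style credit assignment. This is not the authors' implementation; the per-token reward values and function names below are hypothetical, and serve only to show how a terminal-only reward spreads the same credit over every token, whereas per-token immediate rewards localize credit to the token that earned (or lost) it.

```python
# Illustrative sketch (not IRCoCo itself): compare the return each
# generated token receives under (a) a single delayed reward for the
# final code state and (b) per-token immediate rewards.

def delayed_returns(num_tokens, final_reward, gamma=1.0):
    """All credit comes from one terminal reward; every token's return
    is that reward, discounted by its distance from the end."""
    return [final_reward * gamma ** (num_tokens - 1 - t)
            for t in range(num_tokens)]

def immediate_returns(token_rewards, gamma=1.0):
    """Each token's return is its own reward plus discounted future
    rewards, so an early error is penalized exactly where it occurs."""
    returns = []
    running = 0.0
    for r in reversed(token_rewards):
        running = r + gamma * running
        returns.append(running)
    return list(reversed(returns))

if __name__ == "__main__":
    # Hypothetical 4-token completion whose second token is wrong
    # (assumed per-token quality scores, not real model outputs).
    per_token = [1.0, -1.0, 0.5, 0.5]
    print(delayed_returns(4, sum(per_token)))  # uniform, deferred credit
    print(immediate_returns(per_token))        # credit tied to each token
```

Under the delayed scheme every token receives the same aggregate signal, so the faulty second token is indistinguishable from the correct ones; the immediate scheme gives it a distinctly lower return, which is the kind of fine-grained feedback the abstract argues for.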

Wed 17 Jul

Displayed time zone: Brasilia, Distrito Federal, Brazil

11:00 - 12:30
Code Search and Completion (Industry Papers / Research Papers) at Pitomba
Chair(s): Akond Rahman Auburn University
11:00
18m
Talk
Leveraging Large Language Models for the Auto-remediation of Microservice Applications - An Experimental Study
Industry Papers
Komal Sarda York University, Zakeya Namrud York University, Marin Litoiu York University, Canada, Larisa Shwartz IBM T.J. Watson Research, Ian Watts IBM Canada
11:18
18m
Talk
CodePlan: Repository-level Coding using LLMs and Planning
Research Papers
Ramakrishna Bairi Microsoft Research, India, Atharv Sonwane Microsoft Research, India, Aditya Kanade Microsoft Research, India, Vageesh D C Microsoft Research, India, Arun Iyer Microsoft Research, India, Suresh Parthasarathy Microsoft Research, India, Sriram Rajamani Microsoft Research, India, B. Ashok Microsoft Research, India, Shashank Shet Microsoft Research, India
11:36
18m
Talk
An Empirical Study of Code Search in Intelligent Coding Assistant: Perceptions, Expectations, and Directions
Industry Papers
Chao Liu Chongqing University, Xindong Zhang Alibaba Cloud Computing Co. Ltd., Hongyu Zhang Chongqing University, Zhiyuan Wan Zhejiang University, Zhan Huang Chongqing University, Meng Yan Chongqing University
11:54
18m
Talk
DeciX: Explain Deep Learning Based Code Generation Applications
Research Papers
Simin Chen University of Texas at Dallas, Zexin Li University of California, Riverside, Wei Yang University of Texas at Dallas, Cong Liu University of California, Riverside
12:12
18m
Talk
IRCoCo: Immediate Rewards-Guided Deep Reinforcement Learning for Code Completion
Research Papers
Bolun Li Shandong Normal University, Zhihong Sun Shandong Normal University, Tao Huang Shandong Normal University, Hongyu Zhang Chongqing University, Yao Wan Huazhong University of Science and Technology, Chen Lyu Shandong Normal University, Ge Li Peking University, Zhi Jin Peking University