This program is tentative and subject to change.

Wed 17 Jul 2024 12:12 - 12:30 at Baobá 6 - Code Search and Completion

Code completion aims to enhance programming productivity by predicting potential code based on the current programming context. Recently, pretrained language models (LMs) have become prominent in this field. Various approaches have been proposed to fine-tune LMs for code completion using supervised fine-tuning (SFT) techniques. However, the inherent exposure bias of these models can cause errors to accumulate early in the sequence completion, leading to even more errors in subsequent completions. To address this problem, deep reinforcement learning (DRL) is an alternative technique for fine-tuning LMs for code completion, which can improve generalization capabilities and overall performance. Nevertheless, integrating DRL-based strategies into code completion faces two major challenges: 1) the dynamic nature of the code context requires the completion model to adapt quickly to changes, which poses difficulties for conventional DRL strategies that rely on delayed rewards of the final code state; 2) it is difficult to evaluate the correctness of partial code, so reward redistribution-based strategies cannot be adapted to code completion. To tackle these challenges, we propose IRCoCo, a code completion-specific DRL-based fine-tuning framework. This framework is designed to provide immediate rewards as feedback for detecting dynamic context changes arising from continuous edits during code completion. With the aid of immediate feedback, the fine-tuned LM can gain a more precise understanding of the current context, thereby enabling effective adjustment of the LM and optimizing code completion in a more refined manner. Experimental results demonstrate that fine-tuning pretrained LMs with IRCoCo leads to significant improvements in the code completion task, outperforming both SFT-based and other DRL-based baselines.
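To make the contrast between delayed and immediate rewards concrete, the following is a minimal, self-contained sketch of per-token (immediate) policy-gradient fine-tuning on a toy vocabulary. It is not IRCoCo's actual architecture or reward model; the `VOCAB`, `TARGET`, and `immediate_reward` definitions are illustrative stand-ins, with a simple match-the-reference reward in place of a learned evaluator of partial code.

```python
import math
import random

random.seed(0)

# Toy vocabulary and reference continuation (illustrative only).
VOCAB = ["def", "add", "(", "a", ",", "b", ")", ":", "return", "+"]
TARGET = ["def", "add", "(", "a", ",", "b", ")", ":"]

# Per-position logits standing in for a pretrained LM's output head.
logits = [[0.0] * len(VOCAB) for _ in TARGET]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def immediate_reward(pos, token):
    # Immediate reward: +1 if the sampled token matches the reference
    # at this position, -1 otherwise. IRCoCo would instead use a
    # learned signal over the evolving partial code.
    return 1.0 if token == TARGET[pos] else -1.0

def reinforce_step(lr=0.5):
    # Sample a completion token by token and apply a per-token
    # REINFORCE update immediately, rather than waiting for one
    # delayed reward on the finished snippet.
    for pos in range(len(TARGET)):
        probs = softmax(logits[pos])
        idx = random.choices(range(len(VOCAB)), weights=probs)[0]
        r = immediate_reward(pos, VOCAB[idx])
        # grad of log pi(idx) w.r.t. logits = one_hot(idx) - probs
        for j in range(len(VOCAB)):
            grad = (1.0 if j == idx else 0.0) - probs[j]
            logits[pos][j] += lr * r * grad

for _ in range(300):
    reinforce_step()

# Greedy decode after fine-tuning.
completion = [VOCAB[max(range(len(VOCAB)), key=lambda j: logits[p][j])]
              for p in range(len(TARGET))]
print(" ".join(completion))
```

Because each position receives its reward as soon as its token is sampled, errors do not have to propagate to the end of the sequence before the policy is corrected, which is the intuition behind preferring immediate over delayed rewards here.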

Displayed time zone: Brasilia, Distrito Federal, Brazil

11:00 - 12:30
Code Search and Completion (Research Papers / Industry Papers) at Baobá 6
11:00
18m
Talk
ClarifyGPT: A Framework for Enhancing LLM-based Code Generation via Requirements Clarification
Research Papers
Fangwen Mu Institute of Software, Chinese Academy of Sciences, Lin Shi Beihang University, Song Wang York University, Zhuohao Yu Institute of Software, Chinese Academy of Sciences, Binquan Zhang Beihang University, ChenXue Wang Institute of Software, Chinese Academy of Sciences, Shichao Liu Software IDE Innovation Lab, Huawei Central Software Institute, Qing Wang Institute of Software, Chinese Academy of Sciences
11:18
18m
Talk
CodePlan: Repository-level Coding using LLMs and Planning
Research Papers
Ramakrishna Bairi Microsoft Research, India, Atharv Sonwane Microsoft Research, India, Aditya Kanade Microsoft Research, India, Vageesh D C Microsoft Research, India, Arun Iyer Microsoft Research, India, Suresh Parthasarathy Microsoft Research, India, Sriram Rajamani Microsoft Research, India, B. Ashok Microsoft Research, India, Shashank Shet Microsoft Research, India
11:36
18m
Talk
An Empirical Study of Code Search in Intelligent Coding Assistant: Perceptions, Expectations, and Directions
Industry Papers
Chao Liu Chongqing University, Xindong Zhang Alibaba Cloud Computing Co. Ltd., Hongyu Zhang Chongqing University, Zhiyuan Wan Zhejiang University, Zhan Huang Chongqing University, Meng Yan Chongqing University
11:54
18m
Talk
DeciX: Explain Deep Learning Based Code Generation Applications
Research Papers
Simin Chen University of Texas at Dallas, Zexin Li University of California, Riverside, Wei Yang University of Texas at Dallas, Cong Liu University of California, Riverside
12:12
18m
Talk
IRCoCo: Immediate Rewards-Guided Deep Reinforcement Learning for Code Completion
Research Papers
Bolun Li Shandong Normal University, Zhihong Sun Shandong Normal University, Tao Huang Shandong Normal University, Hongyu Zhang Chongqing University, Yao Wan Huazhong University of Science and Technology, Chen Lyu Shandong Normal University, Ge Li Peking University, Zhi Jin Peking University