Wed 17 Jul 2024 11:54 - 12:12 at Pitomba - Code Search and Completion Chair(s): Akond Rahman

Deep learning-based code generation (DL-CG) applications have shown great potential for assisting developers in programming with human-competitive accuracy. However, the lack of transparency in such applications, due to the uninterpretable nature of deep learning models, makes the automatically generated programs untrustworthy. In this paper, we develop DeciX, the first explanation method dedicated to DL-CG applications. DeciX is motivated by two unique properties of DL-CG applications: output-to-output dependencies and irrelevant value and semantic space. These properties violate the fundamental assumptions made by existing explainable DL techniques, which makes applying those techniques to DL-CG applications largely ineffective and even incorrect.
DeciX addresses these two limitations by constructing a causal inference dependency graph, together with a novel method leveraging causal inference that accurately quantifies the contribution of each dependency edge in the graph to the end prediction result. Extensive experiments on popular, widely used DL-CG applications and several baseline methods show that DeciX achieves significantly better performance than the state of the art across several critical metrics, including correctness, succinctness, stability, and overhead. Furthermore, DeciX can be applied in practical scenarios because it requires no knowledge of the DL-CG model under explanation. We also conduct case studies that demonstrate the applicability of DeciX in practice.
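To make the abstract's idea concrete, the sketch below illustrates the general notion of causal-intervention-based attribution for a black-box code generation model; it is not the authors' DeciX implementation. The callable `predict_prob`, the `MASK` placeholder token, and the function `edge_contributions` are hypothetical names introduced here for illustration, under the assumption that the model can be queried for the probability of a target output token given a token sequence.

```python
# Hedged sketch (not DeciX itself): estimate each token's contribution to a
# generated token by intervening on it and measuring the change in the model's
# output probability. Because the model is only queried as a black box, no
# knowledge of its internals is needed.

from typing import Callable, Dict, List

MASK = "<unk>"  # assumed placeholder token used for interventions


def edge_contributions(
    tokens: List[str],                      # prompt tokens plus previously generated tokens
    target_token: str,                      # the output token being explained
    predict_prob: Callable[[List[str], str], float],  # hypothetical black-box query
) -> Dict[int, float]:
    """Score each position by the drop in the model's probability of
    `target_token` when that position is replaced with MASK."""
    base = predict_prob(tokens, target_token)
    scores: Dict[int, float] = {}
    for i in range(len(tokens)):
        intervened = tokens[:i] + [MASK] + tokens[i + 1:]
        scores[i] = base - predict_prob(intervened, target_token)
    return scores
```

Passing both the prompt tokens and the already-generated output tokens lets such a scheme score output-to-output dependencies alongside input dependencies, which is the property the abstract highlights as violating the assumptions of existing explainable DL techniques.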

Wed 17 Jul

Displayed time zone: Brasilia, Distrito Federal, Brazil

11:00 - 12:30
Code Search and Completion (Industry Papers / Research Papers) at Pitomba
Chair(s): Akond Rahman Auburn University
11:00
18m
Talk
Leveraging Large Language Models for the Auto-remediation of Microservice Applications - An Experimental Study
Industry Papers
Komal Sarda York University, Zakeya Namrud York University, Marin Litoiu York University, Canada, Larisa Shwartz IBM T.J. Watson Research, Ian Watts IBM Canada
11:18
18m
Talk
CodePlan: Repository-level Coding using LLMs and Planning
Research Papers
Ramakrishna Bairi Microsoft Research, India, Atharv Sonwane Microsoft Research, India, Aditya Kanade Microsoft Research, India, Vageesh D C Microsoft Research, India, Arun Iyer Microsoft Research, India, Suresh Parthasarathy Microsoft Research, India, Sriram Rajamani Microsoft Research, India, B. Ashok Microsoft Research, India, Shashank Shet Microsoft Research, India
11:36
18m
Talk
An Empirical Study of Code Search in Intelligent Coding Assistant: Perceptions, Expectations, and Directions
Industry Papers
Chao Liu Chongqing University, Xindong Zhang Alibaba Cloud Computing Co. Ltd., Hongyu Zhang Chongqing University, Zhiyuan Wan Zhejiang University, Zhan Huang Chongqing University, Meng Yan Chongqing University
11:54
18m
Talk
DeciX: Explain Deep Learning Based Code Generation Applications
Research Papers
Simin Chen University of Texas at Dallas, Zexin Li University of California, Riverside, Wei Yang University of Texas at Dallas, Cong Liu University of California, Riverside
12:12
18m
Talk
IRCoCo: Immediate Rewards-Guided Deep Reinforcement Learning for Code Completion
Research Papers
Bolun Li Shandong Normal University, Zhihong Sun Shandong Normal University, Tao Huang Shandong Normal University, Hongyu Zhang Chongqing University, Yao Wan Huazhong University of Science and Technology, Chen Lyu Shandong Normal University, Ge Li Peking University, Zhi Jin Peking University