The growing complexity of cloud-based software systems has made incident management an integral part of the software development lifecycle. Root cause analysis (RCA), a critical part of the incident management process, is a demanding task for on-call engineers, requiring deep domain knowledge and extensive experience with a team's specific services. Automating RCA can save significant time and ease the burden of incident management on on-call engineers. Recently, researchers have used Large Language Models (LLMs) to perform RCA and have demonstrated promising results. However, these approaches cannot dynamically collect additional diagnostic information such as incident-related logs, metrics, or databases, severely restricting their ability to diagnose root causes. In this work, we explore the use of LLM-based agents for RCA to address this limitation. We present a thorough empirical evaluation of a ReAct agent equipped with retrieval tools on an out-of-distribution dataset of production incidents collected at a large IT corporation. In addition, we qualitatively analyze model predictions to characterize success and failure modes. Results show that ReAct performs competitively with strong retrieval and reasoning baselines, but with substantially higher factual accuracy. We then extend this evaluation by incorporating discussions associated with incident reports as additional inputs for the models, which, surprisingly, does not yield significant performance improvements for any of the models considered. Lastly, we conduct a case study with another team at CompanyX to equip the ReAct agent with tools that give it access to the external diagnostic services the team uses for manual RCA. Our results show how agents can overcome the limitations of prior work, and highlight practical considerations for implementing such a system in practice.
Thu 18 Jul (displayed time zone: Brasilia, Distrito Federal, Brazil)
16:00 - 18:00 | SE4AI 2 | Research Papers / Industry Papers / Demonstrations / Journal First | Room: Mandacaru | Chair(s): Wei Yang (University of Texas at Dallas)
16:00 | 18m Talk | Natural Is The Best: Model-Agnostic Code Simplification for Pre-trained Large Language Models (Research Papers) | Yan Wang (Central University of Finance and Economics), Xiaoning Li (Central University of Finance and Economics), Tien N. Nguyen (University of Texas at Dallas), Shaohua Wang (Central University of Finance and Economics), Chao Ni (School of Software Technology, Zhejiang University), Ling Ding (Central University of Finance and Economics)
16:18 | 18m Talk | On Reducing Undesirable Behavior in Deep-Reinforcement-Learning-Based Software (Research Papers)
16:36 | 9m Talk | Decide: Knowledge-based Version Incompatibility Detection in Deep Learning Stacks (Demonstrations) | Zihan Zhou (The University of Hong Kong), Zhongkai Zhao (National University of Singapore), Bonan Kou (Purdue University), Tianyi Zhang (Purdue University)
16:45 | 18m Talk | Test Input Prioritization for Machine Learning Classifiers (Journal First) | Xueqi Dang (University of Luxembourg), Yinghua Li (University of Luxembourg), Mike Papadakis (University of Luxembourg), Jacques Klein (University of Luxembourg), Tegawendé F. Bissyandé (University of Luxembourg), Yves Le Traon (University of Luxembourg)
17:03 | 18m Talk | How Far Are We with Automated Machine Learning? Characterization and Challenges of AutoML Toolkits (Journal First)
17:21 | 18m Talk | Automated Root Causing of Cloud Incidents using In-Context Learning with GPT-4 (Industry Papers) | Xuchao Zhang (Microsoft), Supriyo Ghosh (Microsoft), Chetan Bansal (Microsoft Research), Rujia Wang (Microsoft), Minghua Ma (Microsoft Research), Yu Kang (Microsoft Research), Saravan Rajmohan (Microsoft)
17:39 | 18m Talk | Exploring LLM-based Agents for Root Cause Analysis (Industry Papers) | Devjeet Roy (Washington State University), Xuchao Zhang (Microsoft), Rashi Bhave (Microsoft Research), Chetan Bansal (Microsoft Research), Pedro Las-Casas (Microsoft), Rodrigo Fonseca (Microsoft Research), Saravan Rajmohan (Microsoft)