Thu 18 Jul 2024 11:36 - 11:54 at Mandacaru - Human Aspects 2 Chair(s): Bianca Trinkenreich

Modern Code Review (MCR) is an integral part of the software development process in which developers improve product quality through collaborative discussion. Unfortunately, these discussions can become heated when inappropriate behavior, often referred to as incivility, enters the conversation. Such negative behavior encompasses personal attacks, insults, disrespectful comments, and derogatory conduct. While researchers have extensively explored incivility in various public domains, our understanding of its causes, consequences, and possible courses of action remains limited within the professional context of software development, specifically within code review discussions. To bridge this gap, our study draws on the experience of 171 professional software developers representing diverse development practices across different geographical regions. Our findings reveal that more than half of these developers (56.72%) have encountered workplace incivility, and a substantial portion of that group (83.70%) reported experiencing such incidents at least once a month. We also identified various causes, positive and negative consequences, and potential courses of action for uncivil communication. To address the negative aspects of incivility, we propose a civility-promoting model that detects uncivil comments during communication and suggests civil alternatives that preserve the original comments' semantics, enabling developers to engage in respectful and constructive discussion. An analysis with four sentiment analyzers on 2K uncivil review comments, followed by a manual evaluation, showed that the generated civil alternatives significantly improved comment sentiment. Moreover, a survey of 36 developers who used our civility model reported its effectiveness in enhancing online development interactions, fostering better relationships, increasing contributor involvement, and expediting development processes.
Our research pioneers the generation of civil alternatives for uncivil discussions in software development, opening new avenues for research on collaboration and communication in the software engineering context.

Thu 18 Jul

Displayed time zone: Brasilia, Distrito Federal, Brazil

11:00 - 12:30
Human Aspects 2 (Research Papers) at Mandacaru
Chair(s): Bianca Trinkenreich (Colorado State University)
11:00
18m
Talk
Can GPT-4 Replicate Empirical Software Engineering Research?
Research Papers
Jenny T. Liang (Carnegie Mellon University), Carmen Badea (Microsoft Research), Christian Bird (Microsoft Research), Robert DeLine (Microsoft Research), Denae Ford (Microsoft Research), Nicole Forsgren (Microsoft Research), Thomas Zimmermann (Microsoft Research)
Pre-print
11:18
18m
Talk
Do Code Generation Models Think Like Us? - A Study of Attention Alignment between Large Language Models and Human Programmers
Research Papers
Bonan Kou (Purdue University), Shengmai Chen (Purdue University), Zhijie Wang (University of Alberta), Lei Ma (The University of Tokyo & University of Alberta), Tianyi Zhang (Purdue University)
Pre-print
11:36
18m
Talk
Do Words Have Power? Understanding and Fostering Civility in Code Review Discussion
Research Papers
Md Shamimur Rahman (University of Saskatchewan, Canada), Zadia Codabux (University of Saskatchewan), Chanchal K. Roy (University of Saskatchewan, Canada)
11:54
18m
Talk
Effective Teaching through Code Reviews: Patterns and Anti-Patterns
Research Papers
Anita Sarma (Oregon State University), Nina Chen (Google)
DOI
12:12
18m
Talk
An empirical study on code review activity prediction in practice
Research Papers
Doriane Olewicki (Queen's University), Sarra Habchi (Ubisoft Montréal), Bram Adams (Queen's University)
Pre-print