Do Code Reviews Lead to Fewer Code Smells?
Erdem Tuna, Carolyn Seaman, Eray Tüzün, et al.

Published: Jan. 1, 2024

Context: The code review process is conducted by software teams with various motivations. Among other goals, reviews act as a gatekeeper for quality. Objective: In this study, we explore whether code reviews have an impact on one specific aspect of quality: maintainability. We further extend our investigation by analyzing whether the quality of the review itself influences maintainability. Method: We investigate whether code smells in the code that was reviewed are related to the code review process, using correlation analysis. We augment the quantitative analysis with a focus group study to learn practitioners' opinions. Results: Our investigations revealed that the code smell level neither increases nor decreases in 8 out of 10 reviews, regardless of review quality. Contrary to intuition, we found little to no relationship between code reviews and code smells. We identified potential reasons behind these counter-intuitive results in our data. Furthermore, practitioners still believe that code reviews help in improving quality. Conclusion: The results imply that the community should update code review goals and practices, and reevaluate them to better align with modern realities.
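As a rough illustration of the correlation analysis this abstract describes, the sketch below pairs per-review comment counts with before/after code smell counts and computes a Spearman correlation. The record layout, sample values, and choice of measures are assumptions for illustration, not the paper's actual setup.

```python
# Minimal sketch of a correlation analysis between code review activity
# and the change in code smell count per review. The data schema is
# hypothetical; the paper's actual measures and tooling may differ.
from scipy.stats import spearmanr

# Each record: (review_comment_count, smells_before, smells_after)
reviews = [
    (3, 12, 12),
    (0, 7, 9),
    (8, 15, 13),
    (1, 4, 4),
    (5, 10, 10),
]

comment_counts = [r[0] for r in reviews]
smell_deltas = [after - before for _, before, after in reviews]

rho, p_value = spearmanr(comment_counts, smell_deltas)
print(f"Spearman rho={rho:.2f}, p={p_value:.3f}")

# A rho near zero with a high p-value would mirror the paper's finding
# that review activity shows little to no relationship with smell changes.
unchanged = sum(1 for d in smell_deltas if d == 0)
print(f"{unchanged}/{len(reviews)} reviews left the smell count unchanged")
```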

Language: English

Hold on! Is my feedback useful? Evaluating the usefulness of code review comments
Sharif Ahmed, Nasir U. Eisty

Empirical Software Engineering, Journal Year: 2025, Volume and Issue: 30(3)

Published: Feb. 21, 2025

Language: English

Citations: 0

CodeDoctor: multi-category code review comment generation
Yingling Li, Yuhan Wu, Z.M. Wang, et al.

Automated Software Engineering, Journal Year: 2025, Volume and Issue: 32(1)

Published: Feb. 27, 2025

Language: English

Citations: 0

Adoption of RMVRVM Paradigm in Industrial Setting: An Empirical Study
Lavneet Singh, Saurabh Tiwari

Published: Feb. 20, 2025

Language: English

Citations: 0

Improving Automated Code Reviews: Learning From Experience
Hong Yi Lin, Patanamon Thongtanunam, Christoph Treude, et al.

Published: April 15, 2024

Modern code review is a critical quality assurance process that is widely adopted in both industry and open source software environments. This process can help newcomers learn from the feedback of experienced reviewers; however, it often brings a large workload and stress to reviewers. To alleviate this burden, the field of automated code review aims to automate the review process by teaching large language models to provide feedback on submitted code, just as a human reviewer would. A recent approach pre-trained and fine-tuned an intelligent model on a large-scale code review corpus. However, such techniques did not fully utilise the quality variation amongst the training data. Indeed, reviewers with a higher level of experience or familiarity with the code will likely provide deeper insights than others. In this study, we set out to investigate whether higher-quality reviews can be generated by models that are trained with an experience-aware oversampling technique. Through our quantitative and qualitative evaluation, we find that such models can increase the correctness, level of information, and meaningfulness of reviews generated by the current state-of-the-art model without introducing new training data. The results suggest that a vast amount of high-quality reviews is underutilised by current training strategies. This work sheds light on resource-efficient ways to boost the performance of automated code review models.
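The sketch below illustrates one plausible form of the experience-aware oversampling idea from this abstract: training pairs written by more experienced reviewers are duplicated more often. The linear weighting, field names, and the `max_copies` cap are assumptions for illustration, not the paper's exact scheme.

```python
# Hedged sketch of experience-aware oversampling: training examples
# authored by more experienced reviewers are duplicated more often so
# the model sees them more frequently during fine-tuning.
import random

def oversample_by_experience(examples, max_copies=3, seed=0):
    """examples: list of dicts with 'code', 'review', and 'reviewer_exp'
    (e.g., prior review count). Returns an oversampled, shuffled list."""
    rng = random.Random(seed)
    max_exp = max(e["reviewer_exp"] for e in examples) or 1
    resampled = []
    for e in examples:
        # Scale copies linearly with normalized experience (assumption).
        copies = 1 + round((max_copies - 1) * e["reviewer_exp"] / max_exp)
        resampled.extend([e] * copies)
    rng.shuffle(resampled)
    return resampled

data = [
    {"code": "diff_a", "review": "rename for clarity", "reviewer_exp": 120},
    {"code": "diff_b", "review": "lgtm", "reviewer_exp": 2},
]
# The experienced reviewer's example now appears multiple times.
print(len(oversample_by_experience(data)))
```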

Language: English

Citations: 1

Do code reviews lead to fewer code smells?
Erdem Tuna, Carolyn Seaman, Eray Tüzün, et al.

Journal of Systems and Software, Journal Year: 2024, Volume and Issue: 215, P. 112101 - 112101

Published: May 20, 2024

Language: English

Citations: 1

Understanding Emojis :) in Useful Code Review Comments
Sharif Ahmed, Nasir U. Eisty

Published: April 20, 2024

Emojis and emoticons serve as non-verbal cues that are increasingly prevalent across various platforms, including Modern Code Review. These cues often carry emotive or instructive weight for developers. Our study dives into the utility of Code Review comments (CR comments) by scrutinizing the sentiments and semantics conveyed by emojis within these comments. To assess the usefulness of CR comments, we augment traditional 'textual' features and pre-trained embeddings with 'emoji-specific' features and embeddings. To fortify our inquiry, we expand an existing dataset with emoji annotations, guided by research on GitHub emoji usage, and re-evaluate the comments' usefulness accordingly. Our models, which incorporate textual and emoji-based sentiment features and semantic understandings of emojis, substantially outperform baseline metrics. The often-overlooked emoji elements in CR comments emerge as key indicators of usefulness, suggesting that these symbols carry significant weight.
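The sketch below shows one way to augment textual features with simple emoji-derived features for a usefulness classifier, in the spirit of this abstract. The emoji inventory, hand-crafted counts, and toy labels are hypothetical; the paper itself uses 'emoji-specific' embeddings rather than raw counts.

```python
# Illustrative sketch: concatenate TF-IDF text features with simple
# emoji-count features to classify CR comment usefulness. Feature
# design and labels are assumptions, not the paper's pipeline.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

EMOJIS = ["👍", "🙂", "🚀", "❓", "👀"]  # tiny illustrative inventory

def emoji_features(comment):
    # One count per emoji plus a total-emoji count (hypothetical features).
    counts = [comment.count(e) for e in EMOJIS]
    return counts + [sum(counts)]

comments = ["Fix the null check here ❓", "👍 looks good", "Refactor this loop 🚀🚀"]
labels = [1, 0, 1]  # 1 = useful (e.g., triggered a change); assumed labels

tfidf = TfidfVectorizer().fit(comments)
X_text = tfidf.transform(comments).toarray()
X_emoji = np.array([emoji_features(c) for c in comments])
X = np.hstack([X_text, X_emoji])  # textual + emoji-specific features

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))
```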

Language: English

Citations: 1

Towards Automated Classification of Code Review Feedback to Support Analytics
Asif Kamal Turzo, Fahim Faysal, Ovi Poddar, et al.

Published: Oct. 26, 2023

Background: As improving code review (CR) effectiveness is a priority for many software development organizations, projects have deployed CR analytics platforms to identify potential improvement areas. The number of issues identified, which is a crucial metric to measure CR effectiveness, can be misleading if all issues are placed in the same bin. Therefore, a finer-grained classification of the issues identified during CRs can provide actionable insights to improve CR effectiveness. Although recent work by Fregnan et al. proposed automated models to classify CR-induced changes, we noticed two areas for improvement: i) classifying review comments that do not induce changes and ii) using deep neural networks (DNN) in conjunction with code context to improve performance. Aims: This study aims to develop an automated CR comment classifier that leverages DNN models to achieve more reliable performance than Fregnan et al. Method: Using a manually labeled dataset of 1,828 CR comments, we trained and evaluated supervised learning-based DNN models that leverage code context, comment text, and a set of code metrics to classify CR comments into one of the five high-level categories proposed by Turzo and Bosu. Results: Based on our 10-fold cross-validation-based evaluations of multiple combinations of tokenization approaches, we found a model using CodeBERT to achieve the best accuracy of 59.3%. Our approach outperforms Fregnan et al.'s model with 18.7% higher accuracy. Conclusion: In addition to facilitating improved CR analytics, our classifier can be useful for developers in prioritizing code review feedback and for selecting reviewers.
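A minimal sketch of pairing a review comment with its code context and scoring it with a CodeBERT-based sequence classifier, in the spirit of the abstract's approach. The checkpoint is the public microsoft/codebert-base model; the category names and the freshly initialized (untrained) classification head are assumptions for illustration, since the real classifier would be fine-tuned on the labeled dataset.

```python
# Sketch of CodeBERT-style comment classification with code context.
# Categories are placeholders, not necessarily Turzo and Bosu's exact labels.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

CATEGORIES = ["functional", "refactoring", "discussion", "praise", "other"]

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/codebert-base", num_labels=len(CATEGORIES)
)

comment = "Consider extracting this block into a helper method."
code_context = "def process(items):\n    for i in items:\n        ..."

# Encode comment and code context as a sentence pair, as the abstract's
# use of "code context" suggests.
inputs = tokenizer(comment, code_context, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Untrained head, so this prediction is effectively random until fine-tuned.
print(CATEGORIES[int(logits.argmax(dim=-1))])
```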

Language: English

Citations: 3

Exploring the Advances in Identifying Useful Code Review Comments
Sharif Ahmed, Nasir U. Eisty

Published: Oct. 26, 2023

Effective peer code review in collaborative software development necessitates useful reviewer comments and supportive automated tools. Code review comments are a central component of the Modern Code Review process in both industry and open-source software development. Therefore, it is important to ensure that these comments serve their purposes. This paper reflects the evolution of research on the usefulness of code review comments. It examines papers that define the usefulness of code review comments, mine and annotate code review datasets, study developers' perceptions, analyze factors from different aspects, and use machine learning classifiers to automatically predict the usefulness of code review comments. Finally, it discusses open problems and challenges in recognizing useful code review comments for future research.

Language: English

Citations: 1
