Enhancing Code Review Efficiency: Automated Pull Request Evaluation using Natural Language Processing and Machine Learning
Przemysław Wincenty Zydroń, Jarosław Protasiewicz

Advances in Science and Technology – Research Journal, Year: 2023, Issue 17(4), pp. 162–167

Published: Aug. 7, 2023

The practice of code review is crucial in software development to improve quality and promote knowledge exchange among team members. It requires identifying qualified reviewers with the necessary expertise and experience to thoroughly examine the modifications suggested in a pull request, which affects the efficiency of the process. However, it can be costly and time-consuming for maintainers to manually assign suitable reviewers to each pull request in large-scale projects. To address this challenge, various techniques, including machine learning, heuristic-based algorithms, and social network analysis, have been employed to suggest reviewers for pull requests automatically. The primary challenge in recommending reviewers is verifying whether the recommendations are accurate. While there have been attempts to replicate previous recommendation processes or propose new methods, evaluating the correctness of reviews remains an open area of research. New approaches are emerging to assess reviews, but further research is needed to develop more reliable methods that can be applied across contexts. This study investigates whether an automated evaluation of review accuracy, and of its impact, is possible. One possible approach is to use a pre-trained language model like ChatGPT3 to extract key information from review text. Another method uses NLP techniques to automatically generate annotations for review text, which could then be used to train a machine learning model to predict whether reviews are accurate. Automated evaluation mechanisms have the potential to positively affect both open source and industry projects by increasing the transparency and accountability of the review process and improving overall project outcomes. Therefore, developing and implementing effective evaluation systems in these areas could bring significant benefits.
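The annotate-then-classify idea in this abstract can be illustrated with a minimal sketch: assuming a small set of review comments already labeled as accurate or not (the comments and labels below are hypothetical stand-ins for annotations an NLP pipeline would generate), a text classifier is trained to predict accuracy. This is a sketch of the general technique, not the authors' implementation.

```python
# Minimal sketch: train a classifier to predict whether a review comment
# is "accurate". The tiny inline dataset and its labels are hypothetical
# stand-ins for automatically generated annotations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical (comment, label) pairs; 1 = review judged accurate.
reviews = [
    ("This change breaks the null check in parse_config", 1),
    ("LGTM", 0),
    ("Off-by-one in the loop bound, should be range(n - 1)", 1),
    ("nice", 0),
]
texts, labels = zip(*reviews)

# TF-IDF features feeding a linear classifier; a pre-trained language
# model could replace this feature extractor in a stronger variant.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Possible race condition when the cache is refreshed"]))
```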

Language: English

Navigating software development in the ChatGPT and GitHub Copilot era
Stephen L. France

Business Horizons, Year: 2024, Issue unknown

Published: May 1, 2024

Generative AI (artificial intelligence) technologies using LLMs (large language models), such as ChatGPT and GitHub Copilot, with the ability to create code, have the potential to change the software development landscape. Will this process be incremental, with developers learning generative AI skills to supplement their existing skills, or will it be more destructive, with the loss of large numbers of jobs and a radical change in the responsibilities of the remaining developers? Given the rapid growth of generative AI capabilities, it is impossible to provide a crystal ball, but this article aims to give insight into the adoption of generative AI in software development. The article gives an overview of the industry and the job functions of developers. A literature review is combined with a content analysis of online comments from developers to examine how generative AI is being implemented, how it is changing software development, and how developers are responding to these changes. The article ties academic and developer insights together into recommendations for developers and describes a CMM (capability maturity model) framework for assessing and improving LLM usage.

Language: English

Cited by

7

Towards Efficient Fine-tuning of Language Models with Organizational Data for Automated Software Review
Mona Nashaat, James Miller

IEEE Transactions on Software Engineering, Year: 2024, Issue 50(9), pp. 2240–2253

Published: July 15, 2024

Language: English

Cited by

4

A fine-grained taxonomy of code review feedback in TypeScript projects
Nicole Davila, Ingrid Nunes, Igor Wiese

et al.

Empirical Software Engineering, Year: 2025, Issue 30(2)

Published: Jan. 14, 2025

Language: English

Cited by

0

Mutation-Based Approach to Supporting Human–Machine Pair Inspection

Yong Dai, Shaoying Liu, Haiyi Liu

et al.

Electronics, Year: 2025, Issue 14(2), pp. 382–382

Published: Jan. 19, 2025

Human–machine pair inspection refers to a technique that supports programmers and machines working together as a “pair” in source code inspection tasks. The machine provides guidance, while the programmer performs the inspection based on this guidance. Although programmers are often best suited to inspect their own code due to familiarity, overconfidence may lead them to overlook important details. This study introduces a novel mutation-based human–machine pair inspection method, which is designed to direct the programmer’s attention to specific code components by applying targeted mutations. We assess the effectiveness of inspections by analyzing the corrections of these mutations. Our approach involves defining mutation operators for each keyword of the program based on historical defects, developing rules for selecting keywords and a strategy for automatically generating mutants, and designing a comparison method to quantitatively evaluate inspection quality. Through a controlled experiment, we demonstrate the effectiveness of the method in aiding programmers during the inspection process.
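The mutant-generation step described in this abstract can be sketched briefly. Assuming keyword-level mutation operators expressed as simple pattern substitutions (the operator table and the sample function below are illustrative, not the paper's actual operators or selection rules), each applicable operator yields a mutant for the programmer to inspect and correct.

```python
# Minimal sketch of mutant generation: apply a keyword-level mutation
# operator to source text so the inspector's attention is drawn to the
# mutated component. Operators and sample code are hypothetical.
import re

# Hypothetical mutation operators: pattern -> replacement.
OPERATORS = {
    r"<=": "<",
    r"\+": "-",
    r"\band\b": "or",
}

def generate_mutants(source: str):
    """Yield (description, mutated_source) for each applicable operator."""
    for pattern, replacement in OPERATORS.items():
        mutated, count = re.subn(pattern, replacement, source, count=1)
        if count:  # skip operators that do not match this source
            yield f"replaced first '{pattern}' with '{replacement}'", mutated

code = "def in_range(x, lo, hi):\n    return lo <= x and x <= hi\n"
for desc, mutant in generate_mutants(code):
    print(desc)
    print(mutant)
```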

Language: English

Cited by

0

Accountability in Code Review: The Role of Intrinsic Drivers and the Impact of LLMs
Adam Alami, Victor Vadmand Jensen, Neil A. Ernst

et al.

ACM Transactions on Software Engineering and Methodology, Year: 2025, Issue unknown

Published: Feb. 28, 2025

Accountability is an innate part of social systems. It maintains stability and ensures positive pressure on individuals’ decision-making. As actors in a social system, software developers are accountable to their team and organization for their decisions. However, the drivers of accountability and how it changes behavior in software development are less understood. In this study, we look at how aspects of code review affect software engineers’ sense of accountability for code quality. Since software engineering (SE) is increasingly involving Large Language Model (LLM) assistance, we also evaluate the impact of introducing LLM-assisted code reviews. We carried out a two-phased sequential qualitative study (interviews → focus groups). In Phase I (16 interviews), we sought to investigate the intrinsic drivers of software engineers influencing their sense of accountability for code quality, relying on self-reported claims. In Phase II, we tested these traits in a more natural setting by simulating traditional peer-led reviews with focus groups and then LLM-assisted review sessions. We found that there are four key drivers of the sense of accountability for code quality: personal standards, professional integrity, pride in code quality, and maintaining one’s reputation. In code review, we observed a transition from individual to collective accountability once reviews are initiated. The introduction of LLMs disrupts this process, challenging the reciprocity taking place in evaluations, i.e., one cannot be accountable to an LLM. Our findings imply that integrating AI into SE must preserve accountability mechanisms.

Language: English

Cited by

0

Exploring the use of LLMs for the Selection phase in systematic literature studies
Lukas Thode, Umar Iftikhar, Daniel Méndez

et al.

Information and Software Technology, Year: 2025, Issue unknown, pp. 107757–107757

Published: May 1, 2025

Language: English

Cited by

0

Tales From the Trenches: Expectations and Challenges From Practice for Code Review in the Generative AI Era
Nicole Davila, Jorge Melegati, Igor Wiese

et al.

IEEE Software, Year: 2024, Issue 41(6), pp. 38–45

Published: July 18, 2024

In this study, we investigate what has been discussed about generative AI in the code review context by performing a gray literature review. We analyzed 42 documents and found insights from practice as well as proposals of solutions using generative models.

Language: English

Cited by

2

The upper bound of information diffusion in code review
Michael Dorner, Daniel Méndez, Krzysztof Wnuk

et al.

Empirical Software Engineering, Year: 2024, Issue 30(1)

Published: Oct. 17, 2024

Language: English

Cited by

1

Software Security Analysis in 2030 and Beyond: A Research Roadmap
Marcel Böhme, Eric Bodden, Tevfik Bultan

et al.

ACM Transactions on Software Engineering and Methodology, Year: 2024, Issue unknown

Published: Dec. 19, 2024

As our lives, businesses, and indeed our world economy become increasingly reliant on the secure operation of many interconnected software systems, the software engineering research community is faced with unprecedented research challenges, but also with exciting new opportunities. In this roadmap paper, we outline our vision of Software Security Analysis for the software systems of the future. Given the recent advances in generative AI, we need new methods to assess and maximize the security of code co-written by machines. As future software systems become heterogeneous, we need practical approaches that work even if some functions are automatically generated, e.g., by deep neural networks. As software depends ever more on the software supply chain, we need tools that scale to an entire ecosystem. What kind of vulnerabilities will exist in the future, and how do we detect them? When all the shallow bugs are found, how do we discover vulnerabilities hidden deeply in the system? Assuming we cannot find all flaws, how can we nevertheless protect the system? To answer these questions, we start with a survey of recent advances in software security, then discuss open challenges and opportunities, and conclude with a long-term perspective on the field.

Language: English

Cited by

1

Code Reviewer Recommendation Based on a Hypergraph with Multiplex Relationships

Yu Qiao, Jian Wang, Can Cheng

et al.

2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER), Year: 2024, Issue unknown, pp. 417–428

Published: March 12, 2024

Language: English

Cited by

0