AI-assisted coding creates more problems – report
Friday, December 19, 2025, 10:00, by InfoWorld
AI code generation appears to have a few kinks to work out before it can fully dominate software development, according to a new report from CodeRabbit. In the company's pull-request analysis, AI-generated code produced 1.7 times as many problems as human-written code.
AI coding assistants have become a standard part of the software development workflow, but developers have raised alarms, the report said. On average, AI-generated pull requests contained 10.83 issues apiece, versus 6.45 for human-generated code, CodeRabbit said, which works out to the headline ratio (10.83 ÷ 6.45 ≈ 1.7). Pull requests for AI-coauthored code also showed more frequent spikes in issue counts. However, according to CodeRabbit, the distribution was the more important story: AI-generated pull requests had a much heavier tail, meaning they produced far more “busy” reviews, and AI pull requests were harder to review in multiple ways. Teams adopting AI coding tools should expect higher variance and more frequent spikes in pull-request issues that demand deeper scrutiny, according to the report.

Overall, reviews of AI-generated pull requests found the highest number of issues related to logic and correctness. But within every major category, including correctness, maintainability, security, and performance, AI-coauthored code consistently generated more issues than code written by humans alone, the report said.

In the report, released on December 17, CodeRabbit said it had analyzed 470 open source GitHub pull requests: 320 that were AI-coauthored and 150 that were likely written by humans alone. In the blog post introducing the report, the company said the results were “clear, measurable, and consistent with what many developers have been feeling intuitively: AI accelerates output, but it also amplifies certain categories of mistakes.”

The report also found security issues rising consistently in AI-coauthored pull requests. While none of the noted vulnerabilities were unique to AI-generated code, they appeared significantly more often, increasing the overall risk profile of AI-assisted development. AI makes dangerous security mistakes that development teams must get better at catching, the report advised.

There were, however, some areas where AI came out ahead, the report said. Spelling errors were almost twice as common in human-authored code (18.92 vs. 10.77). This might be because human coders write far more inline prose and comments, or it could simply be that developers are “bad at spelling,” the report speculated. Testability issues also appeared more frequently in human code (23.65 vs. 17.85).

Nonetheless, the overall findings indicate that guardrails are needed as AI-generated code becomes a standard part of the workflow, CodeRabbit said. Project-specific context should be provided up front, with models given access to constraints such as invariants, config patterns, and architectural rules. To reduce issues with readability, formatting, and naming, strict CI rules should be applied. For correctness, developers should require pre-merge tests for any non-trivial control flow. Security defaults should be codified. Developers should also encourage idiomatic data structures, batched I/O, and pagination, and should run smoke tests on I/O-heavy or resource-sensitive paths. AI-aware pull-request checklists should be adopted, and a third-party code review tool should be used.
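To make the first of those recommendations concrete: a strict CI rule for formatting and naming can be as small as a script that fails the build when checks fail. The following is a minimal sketch, not from the report, assuming a Python codebase checked with the ruff linter/formatter; swap in whatever tools your stack uses.

    # Minimal CI gate sketch: fail the build on lint or formatting violations.
    # Assumes a Python project using ruff; naming checks come from ruff's
    # pep8-naming ("N") rules if they are enabled in the project config.
    import subprocess
    import sys

    CHECKS = [
        ["ruff", "check", "."],              # lint rules, including naming
        ["ruff", "format", "--check", "."],  # formatting must already be clean
    ]

    def main() -> int:
        failed = False
        for cmd in CHECKS:
            print("running:", " ".join(cmd))
            if subprocess.run(cmd).returncode != 0:
                failed = True
        return 1 if failed else 0

    if __name__ == "__main__":
        sys.exit(main())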
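The “pre-merge tests for any non-trivial control flow” advice might look like the sketch below. The retry-classification function is a hypothetical example of the kind of branchy logic the report flags as error-prone; the parametrized pytest cases pin down every branch before merge.

    # Sketch: pre-merge tests that pin down non-trivial control flow.
    # The function and all names here are hypothetical, for illustration only.
    import pytest

    def classify_retry(status: int, attempt: int, max_attempts: int = 3) -> str:
        """Decide what to do with an HTTP response inside a retry loop."""
        if 200 <= status < 300:
            return "done"
        if status in (429, 503) and attempt < max_attempts:
            return "retry"          # transient errors retried up to the cap
        if 400 <= status < 500:
            return "fail"           # other client errors are never retried
        return "fail" if attempt >= max_attempts else "retry"

    @pytest.mark.parametrize("status,attempt,expected", [
        (200, 1, "done"),    # success short-circuits
        (429, 1, "retry"),   # transient error below the cap
        (429, 3, "fail"),    # transient error at the cap
        (404, 1, "fail"),    # non-transient client error, no retry
        (500, 2, "retry"),   # server error below the cap
        (500, 3, "fail"),    # server error at the cap
    ])
    def test_classify_retry_covers_every_branch(status, attempt, expected):
        assert classify_retry(status, attempt) == expected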
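Codified security defaults are patterns enforced in shared helpers rather than left to each individual change. The report does not prescribe specific defaults; the two below, parameterized SQL and explicit network timeouts, are common examples of ours, using only the Python standard library.

    # Sketch of codified security/robustness defaults (illustrative, not from
    # the report): shared helpers enforce the safe pattern everywhere.
    import sqlite3
    import urllib.request

    def find_user(conn: sqlite3.Connection, username: str):
        # Parameterized query: user input is never interpolated into SQL.
        return conn.execute(
            "SELECT id, username FROM users WHERE username = ?", (username,)
        ).fetchone()

    def fetch_status(url: str) -> int:
        # Explicit timeout: a hung remote endpoint cannot hang the service.
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status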
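Likewise, “batched I/O and pagination” is about avoiding per-item round trips. A sketch follows, with fetch_page and save_batch as hypothetical stand-ins for a real API or database client.

    # Sketch: pagination and batched writes instead of per-item round trips.
    # fetch_page and save_batch are hypothetical stand-ins for a real client.
    from typing import Iterator

    def iter_records(fetch_page, page_size: int = 100) -> Iterator[dict]:
        """Yield records one page at a time instead of loading everything."""
        page = 0
        while True:
            batch = fetch_page(offset=page * page_size, limit=page_size)
            if not batch:
                return
            yield from batch
            page += 1

    def copy_records(fetch_page, save_batch, batch_size: int = 100) -> int:
        """Write in batches rather than issuing one I/O call per record."""
        buffer, written = [], 0
        for record in iter_records(fetch_page):
            buffer.append(record)
            if len(buffer) >= batch_size:
                save_batch(buffer)      # one round trip for the whole batch
                written += len(buffer)
                buffer.clear()
        if buffer:                      # flush the final partial batch
            save_batch(buffer)
            written += len(buffer)
        return written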
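And a smoke test for an I/O-heavy path can be a single test with a crude time and output budget. In this sketch, process_export and its module are hypothetical and the thresholds are illustrative; tmp_path is pytest's built-in temporary-directory fixture.

    # Sketch: a smoke test that puts a crude budget on an I/O-heavy path.
    # process_export is a hypothetical entry point; thresholds are illustrative.
    import time

    def test_export_smoke(tmp_path):
        from myapp.export import process_export  # hypothetical module
        start = time.monotonic()
        out_file = tmp_path / "export.csv"
        process_export(destination=out_file, limit=1_000)
        elapsed = time.monotonic() - start
        assert out_file.exists() and out_file.stat().st_size > 0
        assert elapsed < 5.0  # fails fast if a change makes I/O pathological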
Other findings from the report include the following:

- Severity escalates with AI, with more critical and major issues occurring.
- AI introduced nearly twice as many naming inconsistencies; unclear naming, mismatched terminology, and generic identifiers appeared frequently.
- AI code “looks right” at a glance but often violates local idioms or structure.
- AI-generated code often created issues correlated with real-world outages.
- Performance regressions are rare but disproportionately AI-driven.
- Incorrect ordering, faulty dependency flow, or misuse of concurrency primitives appeared far more frequently in AI pull requests.
- Formatting problems were 2.66 times more common in AI pull requests.
https://www.infoworld.com/article/4109129/ai-assisted-coding-creates-more-problems-report.html