
Knowing when to use AI coding assistants

Monday, May 5, 2025, 11:00, by InfoWorld
Just because you can use generative AI in software development doesn’t mean you should. AI coding assistants powered by large language models (LLMs) are a productivity dream in some cases but a debugging nightmare in others. So, where is that line?

“Knowing when and how to rely on AI code assistants is an important skill to learn,” says Kevin Swiber, API strategist at Layered System. “It’s changing day by day as the technology advances. It’s hard to keep up.”

According to Stack Overflow’s 2024 Developer Survey, 63% of professional developers currently use AI in their development process. AI coding assistants are proving to be an incredible time saver for boilerplate code, simple functions, documentation, and debugging.

However, AI-generated code is riddled with quality concerns, and a heavy reliance on it compounds technical debt. Experts view AI agents as less ideal for completely novel coding projects, highly complex architectures, long build cycles, or code reuse.

The short and skinny? AI works better in some situations than others. (Not to harsh your vibe, but vibe coding still requires human supervision.) Below, we’ll consider when AI tools shine and when they don’t, and offer some takeaways for software engineering leaders.

Where AI coding assistants shine

AI performs exceptionally well with common coding patterns. Its sweet spot is generating new code with low complexity when your objectives are well-specified and you’re using popular libraries, says Swiber.

“Web development, mobile development, and relatively boring back-end development are usually fairly straightforward,” adds Charity Majors, co-founder and CTO of Honeycomb. The more common the code and the more online examples, the better AI models perform.

Quicker feedback cycles with AI tend to lead to a better experience. “Tasks with quick feedback loops, like front-end development or writing unit tests, tend to work particularly well,” says Majors. “If it takes you two hours to deploy your back-end code, this will be more challenging.”
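To make the quick-feedback pattern concrete, here is a minimal sketch: a small pure function plus the kind of unit tests an assistant can draft in seconds and a developer can run instantly. The `slugify` helper and its tests are hypothetical examples, not code from the article.

```python
import re

def slugify(title: str) -> str:
    """Convert a title to a URL-friendly slug (hypothetical helper)."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# The sort of unit tests an AI assistant can generate quickly,
# giving an immediate pass/fail signal on the helper above.
def test_slugify_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_collapses_separators():
    assert slugify("AI --- Coding   Assistants") == "ai-coding-assistants"

test_slugify_basic()
test_slugify_collapses_separators()
```

Because the whole loop (generate, run, correct) takes seconds rather than a two-hour deploy, mistakes in the generated code surface immediately.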

Harry Wang, chief growth officer at Sonar, says AI excels at well-understood programming tasks like scaffolding microservices, generating REST APIs, or prototyping new ideas.

“AI coding assistants truly shine when they augment developers, taking on routine and repetitive tasks like generating boilerplate code or suggesting code snippets, functions, or even entire classes,” Wang says. “They accelerate rapid prototyping, exploratory design, and experimental coding, turning initial ideas into tangible code much faster.”
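As an illustration of the boilerplate Wang describes, here is a sketch of the kind of entire class an assistant can reliably generate: a hypothetical `User` record with JSON serialization helpers, the sort of repetitive scaffolding that is tedious to type but easy to review.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class User:
    """Typical assistant-generated boilerplate: a record class
    with round-trip JSON helpers (hypothetical example)."""
    id: int
    name: str
    email: str
    tags: list = field(default_factory=list)

    def to_json(self) -> str:
        # Serialize all fields to a JSON string.
        return json.dumps(asdict(self))

    @classmethod
    def from_json(cls, raw: str) -> "User":
        # Rebuild a User from its JSON representation.
        return cls(**json.loads(raw))

u = User(id=1, name="Ada", email="ada@example.com")
assert User.from_json(u.to_json()) == u
```

Nothing here is hard, which is exactly the point: the pattern is common, well represented in training data, and quick for a human to verify.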

Then, there are all the practical tasks AI can handle for developers outside the code itself. Spencer Kimball, CEO of Cockroach Labs, describes how his engineers often use AI for design scaffolding, fixing tests, analyzing observability data, and blogging. About 70% of that work isn’t direct coding, he says, but it gives developers more time back to program.

Where AI coding assistants fall short

In other situations, you may struggle to get AI working. Generative AI tools can falter when engineering goals go beyond a one-off function, aren’t well-specified, involve large-scale refactoring, or span entirely novel projects with complex requirements.

“You can waste a lot of time and money—and literally lose code—if you just let it do its own thing,” says Layered System’s Swiber. This risk grows if you’re not reviewing outputs regularly or using version control.

Honeycomb’s Majors largely agrees: “AI is much better at generating greenfield code than it is at modifying or extending an existing code base.” Exceptions include large language models trained on that precise task, she adds.

While AI accelerates development, it creates a new burden to review and validate the resulting code. “In a worst-case scenario, the time and effort required to debug and fix subtle issues in AI-generated code could even eclipse the time it would require to write the code from scratch,” says Sonar’s Wang.

Quality and security can suffer from vague prompts or poor contextual understanding, especially in large, complex code bases. Transformer-based models also face limitations with token windows, making it harder to grasp projects with many parts or domain-specific constraints.

“We’ve seen cases where AI outputs are syntactically correct but contain logical errors or subtle bugs,” Wang notes. These mistakes originate from a “black box” process, he says, making AI risky for mission-critical enterprise applications that require strict governance.

“Early-stage projects benefit from AI’s flexibility, while mature code bases demand caution due to risks of context loss and integration conflicts,” says Wang. Part of this is a lack of access to the proper context and data for the use case at hand.

Although Cockroach Labs’ Kimball acknowledges that AI coding tools are improving, the complexity of the company’s massive code base still poses challenges for AI assistants. “There’s way too much context,” he says. Instead of attempting to load everything, developers can stay productive by narrowing their focus to local context and related interfaces. “You want to understand the things that are attached to the one file you’re looking at, and black box some of those things.” By treating parts of the system as abstractions, developers can work iteratively within a smaller scope, a mindset Kimball says keeps engineers productive even in systems as complex as Cockroach’s.

What engineering leaders should know

“It’s no accident everyone’s interested in AI, because it’s a paradigm shift on the same level as electrification or computerization,” adds Kimball, who recently experimented hands-on with vibe coding using Model Context Protocol (MCP) servers wrapped around Cockroach’s APIs.

“As a CEO, it gives you a bit of perspective on what’s possible,” Kimball says. “If you can get a 30% boost in productivity, it’s like hiring 30 people.” Although overspending on AI is a valid concern, the cost pales in comparison to hiring additional engineers, he says.

In fact, AI can give companies an edge. “Don’t worry about spending in the short term—figure out how to use this stuff,” says Kimball. “It’s much better to be a 500-person company than a 5,000-person company.” To his point, new research from DX found that mid-size companies had the highest revenue per engineer of any company size.

Executives are hot on AI at the moment. Shopify CEO Tobi Lütke’s AI mandate is expected to usher in similar decrees and affect hiring. But as AI fervor mounts, the onus is on leaders to understand the limitations of AI and begin delineating boundaries.

Deploying AI willy-nilly can quickly lead to frustrating outcomes—like a model getting itself stuck in a recursive loop of failed tests, says Swiber. “You can’t just set these things off and let them go. You need to monitor what they’re doing.”

Leaders can’t afford to rest on their laurels, either. The fact is, developers will use generative AI whether or not they have approval. According to a 2024 report from BlueOptima, 64% of software developers who use generative AI began using the technology before they were officially granted licenses to do so.

Both developers and leadership should gain familiarity with AI coding assistants to understand their strengths and weaknesses. This awareness will be critical to rolling them out effectively. 

The worst the models will ever be

The challenge is that, given the rapid pace of change, AI discussions often become irrelevant in a few short months… or even weeks. “AI coding assistants are changing rapidly, so anything we say about them probably has a short shelf life,” says Majors.

The future capabilities of AI are hard to forecast. But more and more developers are bullish on its role in their day-to-day workflows and big picture goals. Salesforce’s latest State of IT survey found that 92% of developers expect agentic AI to advance their careers.

For Kimball, agentic AI will open countless doors and pose new threat vectors. “We’re gonna start going from billions to tens of billions to hundreds of billions, maybe even trillions of active things out there that are ultimately hitting APIs more than ever.”

At the enterprise level, the industry must start considering data sovereignty, he adds, because regional data restrictions are rising and agentic AI will lower the threshold for data access. Ultimately, data providers will have to satisfy these regulations and learn how to appropriately secure their data.

Context window limits—the amount of text that a model can consider at once—are what’s really holding back LLMs, but they’re constantly improving. What happens when context windows reach millions or hundreds of millions of tokens? Many of the issues surrounding AI in large code bases could evaporate.

As it stands now, issues still present themselves when working with LLMs for different coding tasks, requiring keen insight on when (and how) to use them wisely. Yet, as Kimball reminds us, AI coding tools are improving exponentially, and we’re only at the beginning.

“The future of software is AI,” he says. “This is the worst the models are ever going to be.”
https://www.infoworld.com/article/3973969/knowing-when-to-use-ai-coding-assistants.html
