Anthropic targets DevSecOps with Claude Code update as AI rivals gear up
Thursday, August 7, 2025, 12:21, by InfoWorld
Anthropic has introduced automated security reviews in its Claude Code product, aiming to help developers identify and fix vulnerabilities earlier in the software development process.
The update includes a GitHub Actions integration and a new “/security-review” command, allowing developers to prompt Claude to scan code for security issues and recommend fixes. The launch follows Anthropic’s release of Claude Opus 4.1, its most advanced AI model to date, which the company says offers major improvements in handling coding tasks. The move highlights growing competition in the AI sector, as rivals including OpenAI prepare to unveil GPT-5 and Meta steps up recruitment efforts with multimillion-dollar offers to top talent.

The launch also comes as AI tools gain traction among developers. In a 2025 Stack Overflow survey, 84% of respondents said they are using or plan to use AI in their development workflows, up from 76% in 2024. However, trust in AI-generated output remains mixed: while 33% of developers in the survey said they trust the accuracy of these tools, 46% expressed distrust, and only 3% reported a high level of trust in the results.

Rethinking code security

Anthropic said the new “/security-review” command lets developers run ad hoc security scans from the terminal before committing code. “Run the command in Claude Code, and Claude will search your codebase for potential vulnerabilities and provide detailed explanations of any issues found,” the company said in a statement.

The command uses a security-focused prompt to detect common vulnerability patterns, including SQL injection risks, cross-site scripting (XSS) flaws, authentication and authorization issues, insecure data handling, and dependency-related weaknesses. Developers can also instruct Claude Code to apply fixes for the issues it finds, keeping security reviews within the inner development loop and resolving problems early.

Analysts say this functionality signals a shift toward greater accountability in GenAI-assisted software development.
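As a concrete illustration of the simplest pattern such a scan targets (the example is ours, not taken from Anthropic's tooling), here is a textbook SQL injection flaw alongside the parameterized-query fix a security reviewer would typically suggest:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: string interpolation lets crafted input rewrite the query.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn, username):
    # The fix: a parameterized query treats the input as data, not SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Calling the unsafe variant with the classic payload `' OR '1'='1` returns every row in the table, while the parameterized version treats the same payload as a literal string and matches nothing.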
Unlike traditional static analysis tools, which often generate high volumes of false positives, Claude uses its large context window to understand code across files and architectural layers. It also provides explainable reasoning for each issue flagged, offering more than just binary alerts. “This enables more intelligent, high-confidence findings,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research. “This is especially relevant as GenAI-driven development, often referred to as ‘vibe coding’, increases code velocity and complexity. To truly reshape enterprise DevSecOps, Claude must prove its resilience at scale across sprawling codebases, bespoke threat models, and varying compliance mandates.”

Claude’s automated reviews could also help teams streamline early-stage security without overburdening human experts. “Claude’s secure code review feature can meaningfully enhance enterprise DevSecOps workflows by automating one of the most time-consuming aspects of the pipeline, that is, manual security reviews,” said Oishi Mazumder, senior analyst at Everest Group. “By allowing developers to initiate reviews using natural language prompts during development, it accelerates shift-left security practices and embeds security earlier in the SDLC.”

Pipeline-ready security checks

Anthropic said its new GitHub Action for Claude Code improves automated security reviews by analyzing every pull request as it is opened. The tool runs automatically, scans code changes for vulnerabilities, and applies customizable rules to reduce false positives and filter out known issues. It then posts inline comments with recommended fixes directly in the pull request.

The feature aims to standardize security reviews across development teams and prevent insecure code from reaching production. It integrates with existing CI/CD pipelines and can be configured to follow an organization’s security policies, according to Anthropic.
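The workflow wiring is conventional GitHub Actions territory. As an illustrative sketch only (the action path and input name below are assumptions, not copied from Anthropic's documentation), a pull-request-triggered review job could look like:

```yaml
# Illustrative sketch: the action reference and input name are assumptions,
# not taken from Anthropic's documentation.
name: claude-security-review
on:
  pull_request:

jobs:
  security-review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write   # needed to post inline review comments
    steps:
      - uses: actions/checkout@v4
      - name: Run Claude security review
        uses: anthropics/claude-code-security-review@main   # assumed action path
        with:
          claude-api-key: ${{ secrets.CLAUDE_API_KEY }}     # assumed input name
```

Since the action posts inline comments, granting `pull-requests: write` and keeping the API key in repository secrets follows standard GitHub Actions practice.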
Analysts say this represents a shift in how generative AI is used in software development. Instead of acting solely as a coding assistant, tools like Claude are beginning to take on roles in security enforcement and governance.

“GitHub Copilot remains a popular AI pair programmer and has only recently added pull request-level security suggestions,” Gogia said. “Microsoft Security Copilot, while robust in telemetry-rich SOC environments, still lacks deep integration with development tooling. Google’s Gemini Code Assist provides strong code summarization and quality improvements, but its depth in vulnerability detection remains untested in highly regulated environments.”

Benefits and risks for enterprises

While AI-assisted code reviews can boost efficiency, analysts caution that they also introduce new risks that enterprise teams must manage carefully. “The greatest risk with enterprise AI security tooling lies in confusing fluency with accuracy,” Gogia pointed out. “Claude Code, like other LLM-based tools, can offer well-articulated but factually incorrect conclusions. This can create a false sense of security that undermines established review protocols.”

To fully realize the value of Claude Code, enterprises will need to embed its outputs within structured SDLC controls, including compliance checks, manual oversight, and audit-ready documentation.
https://www.infoworld.com/article/4035583/anthropic-targets-devsecops-with-claude-code-update-as-ai-...