Web Codegen Scorer evaluates AI-generated web code
Tuesday, September 23, 2025, 01:01, by InfoWorld
Google’s Angular team has unveiled Web Codegen Scorer, a tool for evaluating the quality of web code generated by LLMs (large language models).
Introduced September 16, Web Codegen Scorer focuses on web code generation and comprehensive quality evaluation, Simona Cotin, senior engineering manager for Angular, wrote in a blog post. Cotin noted that the tool helped the Angular team create the fine-tuned prompts, available at angular.dev/ai, that optimize LLMs for the framework. The tool also helps the team better integrate application features and syntax as the framework evolves, she said.

Web Codegen Scorer can be used to make evidence-based decisions about AI-generated code. Developers, for example, could iterate on a system prompt to find the most effective instructions for a project, compare the quality of code produced by different models, and monitor the quality of generated code as models and agents evolve. Web Codegen Scorer differs from other code benchmarks in that it focuses on web code and relies primarily on well-established measures of code quality, Cotin said. It can be used with any web library or framework, or none at all, and with any model. Instructions for installing Web Codegen Scorer can be found on GitHub.

Specific capabilities include:

- Configuring evaluations with different models, frameworks, and tools.
- Specifying system instructions and adding MCP (Model Context Protocol) servers.
- Built-in checks for build success, runtime errors, accessibility, security, LLM rating, and coding best practices.
- Automatic attempts to repair issues detected during code generation.
- Viewing and comparing results with a report viewer UI.
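As a rough sketch of the workflow described above, getting started from the GitHub instructions might look like the following. The exact commands, subcommands, and flags here are assumptions based on the project's npm packaging (github.com/angular/web-codegen-scorer), not taken from this article, so check the repository's README for the authoritative usage:

```shell
# Install the CLI globally via npm (package name assumed to match
# the GitHub project, angular/web-codegen-scorer).
npm install -g web-codegen-scorer

# Scaffold an evaluation environment; its config would name the model,
# framework, system instructions, and any MCP servers to use
# (subcommand assumed).
web-codegen-scorer init

# Run an evaluation against that environment: generate code, run the
# built-in checks (build, runtime errors, accessibility, security,
# LLM rating, best practices), and attempt automatic repairs
# (flag name assumed).
web-codegen-scorer eval --env=my-env/config.mjs

# Open the report viewer UI to browse and compare results
# (subcommand assumed).
web-codegen-scorer report
```

Because the tool is framework- and model-agnostic, the same evaluation environment could in principle be re-run after swapping only the model or system prompt in the config, which is what makes the prompt-iteration and model-comparison workflows described above practical.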
https://www.infoworld.com/article/4061080/web-codegen-scorer-evaluates-ai-generated-web-code.html