Popular LLMs produce insecure code by default
Thursday, 24 April 2025, 15:44, by BetaNews
A new study from Backslash Security looks at seven current versions of OpenAI's GPT, Anthropic's Claude, and Google's Gemini to test how varying prompting techniques influence their ability to produce secure code. Three tiers of prompting techniques, ranging from 'naive' to 'comprehensive,' were used to generate code for everyday use cases. Code output was measured by its resilience against 10 Common Weakness Enumeration (CWE) use cases. The results show that, although secure code output improves with prompt sophistication, all LLMs generally produced insecure code by default. In response to simple, 'naive' prompts, all LLMs tested generated insecure… [Continue Reading]
https://betanews.com/2025/04/24/popular-llms-produce-insecure-code-by-default/
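The excerpt doesn't name the ten CWEs the benchmark tested, but a minimal sketch of one classic weakness class such a test could plausibly include (CWE-89, SQL injection, assumed here purely for illustration) shows the gap between what a naive prompt tends to elicit and what a security-aware prompt should produce:

```python
import sqlite3

# Hypothetical illustration only: CWE-89 (SQL injection) is an assumed
# example; the article excerpt does not list the ten CWEs actually used.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")

def get_user_naive(username: str):
    # Typical of code from a 'naive' prompt: user input is concatenated
    # straight into the SQL string, so crafted input can rewrite the query.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def get_user_secure(username: str):
    # What a security-aware prompt should elicit: a parameterized query,
    # where the driver treats the input purely as data.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

payload = "x' OR '1'='1"
print(get_user_naive(payload))   # [(1, 'alice@example.com')] -- leaks every row
print(get_user_secure(payload))  # [] -- payload is inert data
```

Run against the naive version, the payload rewrites the WHERE clause and returns every row; the parameterized version matches nothing, since the input never alters the statement's structure.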