Navigation
Recherche
|
GenAI vulnerable to prompt injection attacks
jeudi 15 mai 2025, 15:06 , par BetaNews
New research shows that one in 10 prompt injection atempts against GenAI systems manage to bypass basic guardrails. Their non-deterministic nature also means failed attempts can suddenly succeed, even with identical content. AI security company Pangea ran a Prompt Injection Challenge in March this year. The month-long initiative attracted more than 800 participants from 85 countries who attempted to bypass AI security guardrails across three virtual rooms with increasing levels of difficulty. The challenge generated nearly 330,000 prompt injection attempts using more than 300 million tokens, creating a comprehensive dataset that reveals blindspots in how organizations are currently securing their… [Continue Reading]
https://betanews.com/2025/05/15/genai-vulnerable-to-prompt-injection-attacks/
Voir aussi |
56 sources (32 en français)
Date Actuelle
ven. 16 mai - 23:26 CEST
|