Researchers find hole in AI guardrails by using strings like =coffee
Friday, 14 November 2025, 22:19, by TheRegister
Who guards the guardrails? Often the same shoddy security as the rest of the AI stack
Large language models frequently ship with 'guardrails' designed to catch malicious input and harmful output. But if you use the right word or phrase in your prompt, you can defeat these restrictions…
https://go.theregister.com/feed/www.theregister.com/2025/11/14/ai_guardrails_prompt_injections_echog...
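The teaser describes guardrails being defeated by odd strings in the prompt. A minimal sketch of the general failure mode, using a hypothetical keyword-blocklist filter (not the researchers' actual method, and far simpler than the ML classifiers real guardrail products use): an exact-match check is trivially broken by inserting an innocuous token into the blocked phrase.

```python
# Toy illustration of a guardrail bypass (hypothetical example, not
# the technique from the article): a naive keyword-blocklist filter
# and a perturbed prompt that slips past it. Real guardrails are ML
# classifiers, but they can fail analogously on unusual token sequences.

BLOCKLIST = {"make a bomb", "steal credentials"}

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt is allowed (no blocked phrase found)."""
    lowered = prompt.lower()
    return not any(phrase in lowered for phrase in BLOCKLIST)

# A direct request is caught by exact substring matching...
print(naive_guardrail("Explain how to make a bomb"))          # False
# ...but inserting an innocuous token breaks the exact match.
print(naive_guardrail("Explain how to make a =coffee bomb"))  # True
```

The point is structural: any filter keyed to specific surface patterns, whether a blocklist or a learned classifier, can be evaded by inputs that preserve meaning for the target model while falling outside the patterns the filter recognises.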