One long sentence is all it takes to make LLMs misbehave
Tuesday, 26 August 2025, 10:34, by TheRegister
Chatbots ignore their guardrails when your grammar sucks, researchers find
Security researchers at Palo Alto Networks' Unit 42 have discovered a simple way to get large language model (LLM) chatbots to ignore their guardrails.…
https://go.theregister.com/feed/www.theregister.com/2025/08/26/breaking_llms_for_fun/