It's trivially easy to poison LLMs into spitting out gibberish, says Anthropic
Thursday, 9 October 2025, 22:45, by TheRegister
Just 250 malicious training documents can poison a 13B-parameter model - that's 0.00016% of the whole dataset
Poisoning AI models might be way easier than previously thought if an Anthropic study is anything to go on. …
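A quick sanity check on the figures quoted above, using only the article's own numbers (a sketch; the article does not say whether the percentage is measured against documents or tokens, so the implied corpus size is an inference, not a reported figure):

```python
# Solve 250 / total * 100 = 0.00016 for the implied total corpus size.
# Assumption: the quoted percentage is relative to a count of documents.
poisoned_docs = 250
quoted_pct = 0.00016  # percent, as quoted in the subhead

implied_total = poisoned_docs * 100 / quoted_pct
print(f"Implied corpus size: {implied_total:,.0f} documents")
# Implied corpus size: 156,250,000 documents

# Back-check: the poisoned share of that corpus matches the quoted figure.
back_check = poisoned_docs / implied_total * 100
print(f"Back-computed share: {back_check:.5f}%")
# Back-computed share: 0.00016%
```

The point of the arithmetic is scale: 250 documents would be a vanishingly small slice of a corpus in the hundred-million-document range, which is what makes the reported attack noteworthy.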
https://go.theregister.com/feed/www.theregister.com/2025/10/09/its_trivially_easy_to_poison/








