Navigation
Recherche
|
How 'sleeper agent' AI assistants can sabotage your code without you realizing
mardi 16 janvier 2024, 22:30 , par TheRegister
Today's safety guardrails won't catch these backdoors, study warns
Analysis AI biz Anthropic has published research showing that large language models (LLMs) can be subverted in a way that safety training doesn't currently address.…
https://go.theregister.com/feed/www.theregister.com/2024/01/16/poisoned_ai_models/
Voir aussi |
56 sources (32 en français)
Date Actuelle
lun. 20 mai - 16:07 CEST
|