MacMusic  |  PcMusic  |  440 Software  |  440 Forums  |  440TV  |  Zicos
realizing
Recherche

How 'sleeper agent' AI assistants can sabotage your code without you realizing

mardi 16 janvier 2024, 22:30 , par TheRegister
Today's safety guardrails won't catch these backdoors, study warns
Analysis AI biz Anthropic has published research showing that large language models (LLMs) can be subverted in a way that safety training doesn't currently address.…
https://go.theregister.com/feed/www.theregister.com/2024/01/16/poisoned_ai_models/

Voir aussi

News copyright owned by their original publishers | Copyright © 2004 - 2024 Zicos / 440Network
Date Actuelle
lun. 20 mai - 16:07 CEST