DeepSeek’s Safety Guardrails Failed Every Test Researchers Threw at Its AI Chatbot
Friday, January 31, 2025, 19:30, by Wired: Cult of Mac
Security researchers tested 50 well-known jailbreaks against DeepSeek’s popular new AI chatbot. It didn’t stop a single one.
https://www.wired.com/story/deepseeks-ai-jailbreak-prompt-injection-attacks/