DeepSeek’s Safety Guardrails Failed Every Test Researchers Threw at Its AI Chatbot
Friday, January 31, 2025, 19:30, by Wired: Tech.
Security researchers tested 50 well-known jailbreaks against DeepSeek’s popular new AI chatbot. It didn’t stop a single one.
https://www.wired.com/story/deepseeks-ai-jailbreak-prompt-injection-attacks/