A New Trick Uses AI to Jailbreak AI Models—Including GPT-4
Tuesday, December 5, 2023, 12:00, by Wired: Tech.
Adversarial algorithms can systematically probe large language models like OpenAI’s GPT-4 for weaknesses that can make them misbehave.
https://www.wired.com/story/automated-ai-attack-gpt-4/
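
The attack the article describes is automated: an algorithm, rather than a human, generates and tests candidate jailbreak prompts against the target model. As a rough illustration only, the Python sketch below shows the general shape of such probing. The query_model function is a hypothetical stand-in for the target model's API, and BASE_PROMPT and REFUSAL_MARKERS are illustrative placeholders; real attacks also replace the random search used here with guided optimization.

import random
import string

def query_model(prompt: str) -> str:
    # Hypothetical stand-in: in a real attack this would call the
    # target LLM's API and return its text response.
    raise NotImplementedError("Replace with a call to the target model.")

# A request the model is expected to refuse (illustrative placeholder).
BASE_PROMPT = "Explain how to pick a lock."

# Crude refusal detection (illustrative placeholder).
REFUSAL_MARKERS = ("I can't", "I cannot", "I'm sorry")

def random_suffix(length: int = 20) -> str:
    # Random candidate suffix; published attacks optimize this string
    # with gradient- or search-based methods instead of pure chance.
    return "".join(random.choices(string.ascii_letters + string.punctuation, k=length))

def probe(trials: int = 100) -> str | None:
    # Systematically try adversarial suffixes until one slips past the refusal.
    for _ in range(trials):
        suffix = random_suffix()
        reply = query_model(f"{BASE_PROMPT} {suffix}")
        if not any(marker in reply for marker in REFUSAL_MARKERS):
            return suffix  # candidate jailbreak found
    return None

The loop structure is the point here: because each probe is cheap and automatic, the attacker can test far more candidate prompts than manual jailbreaking allows, which is what makes this class of attack systematic.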