MacMusic  |  PcMusic  |  440 Software  |  440 Forums  |  440TV  |  Zicos
models
Recherche

A New Trick Uses AI to Jailbreak AI Models—Including GPT-4

mardi 5 décembre 2023, 12:00 , par Wired: Tech.
Adversarial algorithms can systematically probe large language models like OpenAI’s GPT-4 for weaknesses that can make them misbehave.
https://www.wired.com/story/automated-ai-attack-gpt-4/

Voir aussi

News copyright owned by their original publishers | Copyright © 2004 - 2024 Zicos / 440Network
Date Actuelle
lun. 20 mai - 12:30 CEST