Navigation
Recherche
|
Scholars sneaking phrases into papers to fool AI reviewers
mardi 8 juillet 2025, 00:03 , par TheRegister
Using prompt injections to play a Jedi mind trick on LLMs
A handful of international computer science researchers appear to be trying to influence AI reviews with a new class of prompt injection attack.…
https://go.theregister.com/feed/www.theregister.com/2025/07/07/scholars_try_to_fool_llm_reviewers/
Voir aussi |
56 sources (32 en français)
Date Actuelle
sam. 12 juil. - 19:06 CEST
|