AI Hallucinations Lead To a New Cyber Threat: Slopsquatting
Tuesday, April 22, 2025, 03:40, by Slashdot
These package hallucinations are particularly dangerous because they were found to be persistent, repetitive, and believable. When researchers reran 500 prompts that had previously produced hallucinated packages, 43% of the hallucinations reappeared in every one of 10 successive re-runs, and 58% appeared in more than one run. The study concluded that this persistence indicates 'that the majority of hallucinations are not just random noise, but repeatable artifacts of how the models respond to certain prompts.' This increases their value to attackers, it added. Additionally, these hallucinated package names were observed to be 'semantically convincing.' Thirty-eight percent of them had moderate string similarity to real packages, suggesting a similar naming structure. 'Only 13% of hallucinations were simple off-by-one typos,' Socket added. The research can be found in a paper on arXiv.org (PDF). Read more of this story at Slashdot.
https://it.slashdot.org/story/25/04/22/0118200/ai-hallucinations-lead-to-a-new-cyber-threat-slopsqua...
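The string-similarity finding suggests one practical defense: before installing an AI-suggested dependency, compare its name against a trusted list and flag near-misses. Below is a minimal sketch of that idea, not the researchers' method; the package list, example names, and similarity threshold are illustrative assumptions.

```python
# Sketch: flag AI-suggested dependencies that are absent from a trusted list
# but closely resemble a real package name -- the naming pattern described
# in the article. KNOWN_PACKAGES and the threshold are illustrative only.
from difflib import SequenceMatcher

KNOWN_PACKAGES = {"requests", "numpy", "pandas", "flask", "cryptography"}

def check_suggested_package(name: str, threshold: float = 0.8) -> str:
    """Classify an AI-suggested package name before installing it."""
    if name in KNOWN_PACKAGES:
        return "known package"
    # Find the closest real package by string similarity.
    closest = max(KNOWN_PACKAGES,
                  key=lambda p: SequenceMatcher(None, name, p).ratio())
    score = SequenceMatcher(None, name, closest).ratio()
    if score >= threshold:
        return (f"suspicious: resembles '{closest}' "
                f"(similarity {score:.2f}) -- possible slopsquatting target")
    return "unknown package: verify it exists on the registry before installing"

if __name__ == "__main__":
    for suggestion in ("requests", "requestes", "flask-auth-helper"):
        print(f"{suggestion}: {check_suggested_package(suggestion)}")
```

In practice the trusted list would come from a lockfile or an organization's approved-dependency inventory rather than a hard-coded set.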