MacMusic  |  PcMusic  |  440 Software  |  440 Forums  |  440TV  |  Zicos
attacks
Recherche

LLM Attacks Take Just 42 Seconds On Average, 20% of Jailbreaks Succeed

dimanche 13 octobre 2024, 13:34 , par Slashdot
LLM Attacks Take Just 42 Seconds On Average, 20% of Jailbreaks Succeed
spatwei shared an article from SC World:

Attacks on large language models (LLMs) take less than a minute to complete on average, and leak sensitive data 90% of the time when successful, according to Pillar Security.
Pillar's State of Attacks on GenAI report, published Wednesday, revealed new insights on LLM attacks and jailbreaks, based on telemetry data and real-life attack examples from more than 2,000 AI applications. LLM jailbreaks successfully bypass model guardrails in one out of every five attempts, the Pillar researchers also found, with the speed and ease of LLM exploits demonstrating the risks posed by the growing generative AI (GenAI) attack surface...
The more than 2,000 LLM apps studied for the State of Attacks on GenAI report spanned multiple industries and use cases, with virtual customer support chatbots being the most prevalent use case, making up 57.6% of all apps.

Common jailbreak techniques included 'ignore previous instructions' and 'ADMIN override', or just using base64 encoding. 'The Pillar researchers found that attacks on LLMs took an average of 42 seconds to complete, with the shortest attack taking just 4 seconds and the longest taking 14 minutes to complete.

'Attacks also only involved five total interactions with the LLM on average, further demonstrating the brevity and simplicity of attacks.'

Read more of this story at Slashdot.
https://it.slashdot.org/story/24/10/12/213247/llm-attacks-take-just-42-seconds-on-average-20-of-jail...

Voir aussi

News copyright owned by their original publishers | Copyright © 2004 - 2024 Zicos / 440Network
Date Actuelle
mer. 16 oct. - 14:17 CEST