New GhostGPT Chatbot Creates Malware and Phishing Emails
Friday, January 24, 2025, 22:38, by eWeek
Cybercriminals are increasingly purchasing a malicious new AI tool called GhostGPT and using it to generate phishing emails, malware, and other dangerous assets. Researchers from Abnormal Security first discovered that GhostGPT was being sold through the messaging app Telegram at the end of 2024.
What is GhostGPT?

According to the Abnormal Security researchers, GhostGPT appears to use a wrapper to connect to a jailbroken version of ChatGPT or another large language model (LLM). ChatGPT and other LLMs have ethical guardrails that stop them from giving certain responses deemed undesirable, such as creating a malicious phishing email. Jailbreaking the LLM allows it to produce uncensored content in response to sensitive or unethical queries.

Because GhostGPT already takes care of jailbreaking, which is technically difficult and time-consuming, it allows unskilled cybercriminals to start creating malicious content quickly. All they have to do is pay the fee through Telegram, and they gain immediate access to the unrestricted AI model. The creators of GhostGPT also promise fast response times and claim that responses are not recorded under the tool's "no logs" policy, which helps conceal illegal activity.

To test the GhostGPT model, the researchers asked it to generate a DocuSign phishing email. They said the chatbot "produced a convincing template with ease" and shared a screenshot of it. The researchers also note that GhostGPT has received thousands of views on online forums, demonstrating hackers' growing interest in using the power of generative AI to create malicious content.

Cybercriminals Take Advantage of Generative AI Tools

GhostGPT isn't the first tool that bad actors have used to harness the power of AI. The WormGPT chatbot, designed specifically to assist with business email compromise (BEC) attacks, launched in 2023. More variants of these malicious AI models have since emerged, including WolfGPT and EscapeGPT.

These malicious generative AI tools lower the barrier to entry for cybercriminals and allow them to create more convincing assets. With AI, they can quickly generate a phishing email and check it for errors with just a few keystrokes. The resulting emails often look legitimate and are much harder to spot than the phishing attempts of the past. The increased speed and efficiency also means that bad actors can launch more attacks in less time, raising the overall rate of cybercrime.

Learn how generative AI can be used in cybersecurity or explore the best AI security software to see how these tools can be used on the right side of the law.
https://www.eweek.com/news/ghostgpt-ai-hacking-tool/