
3.6 Million Lies: Is Your AI Chatbot Spewing Russian Disinformation?

Friday, March 7, 2025, 21:53, by eWeek
News reliability rating service NewsGuard reported that leading generative AI chatbots — including OpenAI’s ChatGPT, Google’s Gemini, and Microsoft’s Copilot — are inadvertently spreading Russian propaganda. The findings indicate that these systems have been influenced by a Moscow-based disinformation network known as Pravda, which has flooded them with false narratives.

A recent audit of AI chatbots from major tech companies found that they frequently echo Russian disinformation, with these misleading claims appearing in about 33 percent of chatbot responses. The findings add to growing concerns about AI-driven misinformation, particularly as networks like Pravda target these models to manipulate their outputs.

Pravda: How is it manipulating AI?

Pravda is a network of about 150 websites spreading pro-Kremlin propaganda by aggregating content from Russian state-controlled media and government sources. Established in 2022, it aims to influence global discourse by flooding the internet with false claims, such as baseless accusations about U.S. bioweapons labs in Ukraine and Ukrainian President Volodymyr Zelenskyy’s alleged misuse of U.S. military aid. These fabricated claims have seeped into AI chatbot responses, polluting them with misinformation.

Also known as Portal Kombat, Pravda deliberately games search engines and web crawlers to embed its propaganda in AI training data. By exploiting ranking algorithms, it subtly shapes chatbot responses, leading them to repeat its misinformation. In 2024 alone, the Pravda network published more than 3.6 million articles, according to the American Sunlight Project. Together with NewsGuard's report, these findings highlight how unchecked deceptive claims undermine the integrity of AI-generated content.

Can AI be trusted? Growing reliability concerns

The manipulation of generative AI chatbots from top AI companies like OpenAI, Google, and Microsoft raises serious concerns about the reliability of AI-generated content. Despite these companies’ vast resources and safeguards, their AI solutions remain vulnerable to disinformation campaigns. Given the global reach of these platforms, this issue casts doubt on the trustworthiness of AI responses and their ability to filter out deceptive narratives.

Protecting your organization from AI disinformation

As more companies rely on artificial intelligence for daily operations, the risk of false information corrupting enterprise AI tools increases. Unchecked disinformation can erode trust, mislead employees, and damage corporate credibility. To mitigate AI-driven misinformation, organizations should implement rigorous audits, enforce real-time data validation, and train teams to identify and correct inaccurate AI-generated content immediately; a sketch of what that validation step might look like follows below.
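As a concrete illustration of the real-time validation step, the sketch below checks AI-generated text for citations of domains on a known-disinformation blocklist before the content reaches users. The domain names, function names, and blocklist shown here are hypothetical placeholders, not part of any vendor's API; a real deployment would draw the blocklist from a curated, regularly updated source such as a news-reliability ratings service.

```python
import re

# Hypothetical blocklist of domains tied to known disinformation networks.
# In practice this would be populated from a curated, regularly updated feed,
# not hard-coded.
DISINFO_DOMAINS = {
    "example-pravda-mirror.com",
    "example-portal-kombat.net",
}

# Pulls domain names out of URLs appearing in a block of text.
URL_PATTERN = re.compile(r"https?://(?:www\.)?([a-zA-Z0-9.-]+)", re.IGNORECASE)

def flag_untrusted_sources(ai_output: str) -> list[str]:
    """Return any blocklisted domains cited in a piece of AI-generated text."""
    cited_domains = {match.lower() for match in URL_PATTERN.findall(ai_output)}
    return sorted(cited_domains & DISINFO_DOMAINS)

def validate_ai_response(ai_output: str) -> str:
    """Hold responses that cite blocklisted domains for human review."""
    flagged = flag_untrusted_sources(ai_output)
    if flagged:
        return ("Response withheld pending review; flagged sources: "
                + ", ".join(flagged))
    return ai_output

if __name__ == "__main__":
    sample = ("According to https://example-pravda-mirror.com/report, "
              "the claim is confirmed.")
    print(validate_ai_response(sample))
```

Domain filtering of this kind is only a first pass: it catches direct citations of suspect outlets but not paraphrased claims, which is why the article also stresses audits and training staff to spot and correct bad output.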
The post 3.6 Million Lies: Is Your AI Chatbot Spewing Russian Disinformation? appeared first on eWEEK.
https://www.eweek.com/news/ai-chatbots-russian-disinformation/
