AI’s Errors Are Increasing Despite Advances in Reasoning – Experts Theorize Why

Friday, May 9, 2025, 22:51, by eWeek
AI initially seemed remarkable, with capabilities including answering questions, summarizing documents, and even writing code. But concerns are growing about how frequently AI systems invent false information – known as hallucinations – with error rates in some tests reaching as high as 79%.
This problem recently impacted customers of Cursor, an AI coding assistant platform, when its AI support bot falsely claimed users could only install the software on one computer. The fabricated Cursor policy sparked outrage, with some customers canceling subscriptions before the company intervened. “We have no such policy. You’re of course free to use Cursor on multiple machines,” Cursor CEO Michael Truell clarified on Reddit.
This incident highlights how AI hallucinations are moving beyond harmless errors to cause real-world consequences.
AI accuracy issues in models from OpenAI, DeepSeek, IBM
Independent tests of hallucination rates reveal alarming trends, and the rising AI error rates have experts worried.
Vectara, which tracks how often AI invents information, says AI hallucinations are becoming more common, even in tasks that should be easy to verify. The company found OpenAI’s o3 model fabricated details 6.8% of the time when summarizing news articles, a simple, verifiable task. DeepSeek’s R1 model performed worse at 14.3%, while IBM’s reasoning-focused Granite 3.2 hallucinated 8.7-16.5% of the time, depending on version size.
The Tow Center for Digital Journalism recently found AI-powered search engines are terrible at citing news accurately; in fact, Elon Musk’s Grok 3 generated incorrect citations a staggering 94% of the time.
Experts say they do not yet know why this is happening, though there are theories. One is that newer models are trained to reason through problems step by step, and each step introduces a new chance to go wrong, so errors can compound across long reasoning chains. Another is that the models are trained to always provide an answer, even an incorrect one, rather than admit they don't know.
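To see how per-step errors could compound, consider a back-of-the-envelope sketch in Python. The 95% per-step accuracy is an assumed, illustrative figure, not a measurement from any of the benchmarks discussed here:

    # Illustrative only: assumes each reasoning step fails independently
    # with the same probability, which real models may not satisfy.
    per_step_accuracy = 0.95  # assumed 5% chance of error at each step
    for steps in (1, 5, 10, 20):
        # Probability that at least one step in the chain goes wrong
        chance_of_any_error = 1 - per_step_accuracy ** steps
        print(f"{steps:2d} steps -> {chance_of_any_error:.0%} chance of at least one error")

Under these assumptions, a model that is 95% accurate at each step still has roughly a 64% chance of erring somewhere in a 20-step chain, which illustrates why longer reasoning could raise, rather than lower, overall error rates.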
“Despite our best efforts, they will always hallucinate,” said Amr Awadallah, chief executive officer of Vectara and former Google executive, in The New York Times. “That will never go away.”
OpenAI’s benchmark results
OpenAI, a leader in generative AI technology, is facing an ironic setback with its newest systems. OpenAI’s o3 and o4-mini models use “reasoning” (i.e., a step-by-step thought process) rather than just spitting out answers, but tests show this deeper thinking is backfiring.
According to OpenAI’s benchmark tests:

The o3 model hallucinated 33% of the time when answering questions about public figures (PersonQA).
On simpler factual questions (SimpleQA), o3 hallucinated 51% of the time.
The o4-mini model did even worse: 48% for PersonQA and 79% for SimpleQA.

These numbers are higher than those of OpenAI’s earlier systems. And while OpenAI is studying the issue, the causes are still murky. “We’ll continue our research on hallucinations across all models to improve accuracy and reliability,” said Gaby Raila, an OpenAI spokesperson, to The New York Times.
A serious issue for serious work
While a little misinformation might not be a significant issue if you’re writing a poem or asking for dinner ideas, hallucinations can be dangerous when it comes to court documents, medical records, or business decisions.
Even companies trying to fix the problem of AI hallucinations are struggling. Microsoft and Google have tools that attempt to flag suspicious answers, but experts remain doubtful that these measures will fully solve the issue.
Read eWeek’s coverage about how Amazon has been mitigating AI hallucinations using a mathematical method. On our sister site TechRepublic, we look at Anthropic’s research into how its AI Claude “thinks.”
https://www.eweek.com/news/ai-hallucinations-increase/
