Apple Study Reveals Critical Flaws in AI's Logical Reasoning Abilities
Tuesday, October 15, 2024, 21:21, by Slashdot
Apple's AI research team has uncovered significant weaknesses in the reasoning abilities of large language models, according to a newly published study. MacRumors: The study, published on arXiv [PDF], outlines Apple's evaluation of a range of leading language models, including those from OpenAI, Meta, and other prominent developers, to determine how well they handle mathematical reasoning tasks. The findings reveal that even slight changes in the phrasing of a question can cause major discrepancies in model performance, undermining the models' reliability in scenarios that require logical consistency.
Apple draws attention to a persistent problem in language models: their reliance on pattern matching rather than genuine logical reasoning. In several tests, the researchers demonstrated that adding irrelevant information to a question -- details that should not affect the mathematical outcome -- can lead to vastly different answers from the models. Read more of this story at Slashdot.
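The probe described above — appending a numerically irrelevant clause to a word problem and checking whether the answer changes — can be sketched as a small harness. The problem text below is modeled on an example reported from the study; the harness itself, including the function names, is an illustrative assumption, not code from the paper:

```python
# Sketch of an "irrelevant detail" robustness check, as described above.
# A model call would go where the placeholder comment is; here we only
# show how the two question variants and the consistency test are built.

BASE = ("Oliver picks 44 kiwis on Friday and 58 on Saturday. "
        "On Sunday he picks double the number he picked on Friday. "
        "How many kiwis does Oliver have?")

# The distractor mentions a number but changes nothing mathematically.
DISTRACTOR = (" Five of the kiwis picked on Sunday are a bit smaller "
              "than average.")

PERTURBED = BASE + DISTRACTOR

def ground_truth() -> int:
    """Correct answer for both variants: the distractor is irrelevant."""
    return 44 + 58 + 2 * 44  # = 190

def is_consistent(answer_base: int, answer_perturbed: int) -> bool:
    """A model that reasons (rather than pattern-matches) should give
    the same correct answer to both the base and perturbed questions."""
    return answer_base == answer_perturbed == ground_truth()

# In a real evaluation, answer_base / answer_perturbed would come from
# querying a language model with BASE and PERTURBED respectively.
```

The failure mode the researchers report corresponds to `is_consistent` returning `False`: the model subtracts the five "smaller" kiwis or otherwise lets the distractor leak into the arithmetic.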
https://apple.slashdot.org/story/24/10/15/1840242/apple-study-reveals-critical-flaws-in-ais-logical-...