Researchers Warn Against Treating AI Outputs as Human-Like Reasoning
Thursday, May 29, 2025, 16:11, by Slashdot
Crucially, the analysis also revealed that models trained on incorrect or semantically meaningless intermediate traces can still maintain or even improve performance compared to those trained on correct reasoning steps. The researchers tested this by training models on deliberately corrupted algorithmic traces and found sustained improvements despite the semantic noise. The paper warns that treating these intermediate outputs as interpretable reasoning traces engenders false confidence in AI capabilities and may mislead both researchers and users about the systems' actual problem-solving mechanisms.
https://tech.slashdot.org/story/25/05/29/1411236/researchers-warn-against-treating-ai-outputs-as-hum...
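The corruption experiment described above can be sketched in a few lines. This is a hypothetical illustration, not the researchers' actual code: the `corrupt_trace` function, the field names `trace` and `answer`, and the character-level noise scheme are all assumptions about what "semantically meaningless intermediate traces" might look like in practice.

```python
import random

def corrupt_trace(example, rng):
    """Replace each intermediate reasoning step with meaningless text of the
    same length, while leaving the final answer untouched (hypothetical setup)."""
    noise = ["".join(rng.choices("abcdefgh", k=len(step)))
             for step in example["trace"]]
    return {"trace": noise, "answer": example["answer"]}

rng = random.Random(0)
sample = {"trace": ["add 2 and 3", "carry the 1"], "answer": "5"}
corrupted = corrupt_trace(sample, rng)
```

A model fine-tuned on such corrupted examples still sees the correct final answers, which is one plausible reason performance can survive, or even improve despite, the noise in the intermediate steps.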