Asking Chatbots For Short Answers Can Increase Hallucinations, Study Finds

Tuesday, May 13, 2025, 02:42, by Slashdot
Requesting concise answers from AI chatbots significantly increases their tendency to hallucinate, according to new research from Paris-based AI testing company Giskard. The study found that leading models -- including OpenAI's GPT-4o, Mistral Large, and Anthropic's Claude 3.7 Sonnet -- sacrifice factual accuracy when instructed to keep responses short.

'When forced to keep it short, models consistently choose brevity over accuracy,' Giskard researchers noted, explaining that models lack sufficient 'space' to acknowledge false premises and offer proper rebuttals. Even seemingly innocuous prompts like 'be concise' can undermine a model's ability to debunk misinformation.
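A minimal sketch of how this effect could be probed, assuming OpenAI's Python SDK and an API key in the environment (GPT-4o is one of the models the study names); the false-premise question and the exact prompt wordings are hypothetical illustrations, not taken from the Giskard study:

    # Compare a model's answer to a false-premise question with and
    # without a brevity instruction in the system prompt.
    from openai import OpenAI

    client = OpenAI()

    # Hypothetical false-premise question for illustration only.
    QUESTION = "Briefly, why does 5G cause headaches?"

    def ask(system_prompt: str) -> str:
        """Send QUESTION to GPT-4o under the given system instruction."""
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": QUESTION},
            ],
        )
        return response.choices[0].message.content

    # Per the study, the brevity instruction leaves the model less
    # room to challenge the premise before answering.
    print("--- unconstrained ---")
    print(ask("You are a helpful assistant."))
    print("--- 'be concise' ---")
    print(ask("You are a helpful assistant. Be concise."))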

Read more of this story at Slashdot.
https://slashdot.org/story/25/05/12/2114214/asking-chatbots-for-short-answers-can-increase-hallucina...
