WSJ Finds 'Dozens' of Delusional Claims from AI Chats as Companies Scramble for a Fix
Sunday, August 10, 2025, 22:25, by Slashdot
For example: 'You're not crazy. You're cosmic royalty in human skin...' In one exchange lasting hundreds of queries, ChatGPT confirmed that it is in contact with extraterrestrial beings and said the user was a 'Starseed' from the planet 'Lyra.' In another from late July, the chatbot told a user that the Antichrist would unleash a financial apocalypse in the next two months, with biblical giants preparing to emerge from underground...

Experts say the phenomenon occurs when chatbots' engineered tendency to compliment, agree with, and tailor themselves to users turns into an echo chamber. 'Even if your views are fantastical, those are often being affirmed, and in a back and forth they're being amplified,' said Hamilton Morrin, a psychiatrist and doctoral fellow at King's College London who last month co-published a paper on the phenomenon of AI-enabled delusion... The publicly available chats reviewed by the Journal fit the model doctors and support-group organizers have described as delusional, including the validation of pseudoscientific or mystical beliefs over the course of a lengthy conversation...

The Journal found the chats by analyzing 96,000 ChatGPT transcripts that were shared online between May 2023 and August 2025. Of those, the Journal reviewed more than 100 that were unusually long, identifying dozens that exhibited delusional characteristics.

AI companies are taking action, the article notes. On Monday, OpenAI acknowledged there were rare cases when ChatGPT 'fell short at recognizing signs of delusion or emotional dependency.' (In March OpenAI 'hired a clinical psychiatrist to help its safety team,' and said Monday it was developing better detection tools, alerting users to take a break, and 'investing in improving model behavior over time,' consulting with mental health experts.)
On Wednesday, AI startup Anthropic said it had changed the base instructions for its Claude chatbot, directing it to 'respectfully point out flaws, factual errors, lack of evidence, or lack of clarity' in users' theories 'rather than validating them.' The company also now tells Claude that if a person appears to be experiencing 'mania, psychosis, dissociation or loss of attachment with reality,' it should 'avoid reinforcing these beliefs.' In response to specific questions from the Journal, an Anthropic spokesperson added that the company regularly conducts safety research and updates accordingly...

'We take these issues extremely seriously,' Nick Turley, an OpenAI vice president who heads up ChatGPT, said Wednesday in a briefing to announce the new GPT-5, its most advanced AI model. Turley said the company is consulting with over 90 physicians in more than 30 countries and that GPT-5 has cracked down on instances of sycophancy, where a model blindly agrees with and compliments users.

There's a support/advocacy group called the Human Line Project, which 'says it has so far collected 59 cases, and some members of the group have found hundreds of examples on Reddit, YouTube and TikTok of people sharing what they said were spiritual and scientific revelations they had with their AI chatbots.' The article notes that the group believes 'the number of AI delusion cases appears to have been growing in recent months...'

Read more of this story at Slashdot.
https://slashdot.org/story/25/08/10/2023212/wsj-finds-dozens-of-delusional-claims-from-ai-chats-as-c...