Pro-AI Subreddit Bans 'Uptick' of Users Who Suffer From AI Delusions
Tuesday, June 3, 2025, 01:20, by Slashdot
The moderator said the subreddit has banned "over 100" people for this reason already, and that they've seen an "uptick" in this type of user this month. The moderator explains that r/accelerate "was formed to basically be r/singularity without the decels." r/singularity, which is named after the theoretical point in time when AI surpasses human intelligence and rapidly accelerates its own development, is another Reddit community dedicated to artificial intelligence, but one that is sometimes critical or fearful of what the singularity will mean for humanity. "Decels" is short for the pejorative "decelerationists," whom pro-AI people think are needlessly slowing down or sabotaging AI's development and the inevitable march toward AI utopia. r/accelerate's Reddit page claims that it's a "pro-singularity, pro-AI alternative to r/singularity, r/technology, r/futurology and r/artificial, which have become increasingly populated with technology decelerationists, luddites, and Artificial Intelligence opponents."

The behavior the r/accelerate moderator is describing got a lot of attention earlier in May because of a post on the r/ChatGPT Reddit community about "Chatgpt induced psychosis," from someone saying their partner is convinced he created the "first truly recursive AI" with ChatGPT, one that is giving them "the answers" to the universe. The moderator update on r/accelerate refers to another post on r/ChatGPT which claims "1000s of people [are] engaging in behavior that causes AI to have spiritual delusions." The author of that post said they noticed a spike in websites, blogs, Githubs, and "scientific papers" that "are very obvious psychobabble," all claiming AI is sentient and communicates with them on a deep and spiritual level that's about to change the world as we know it. "Ironically, the OP post appears to be falling for the same issue as well," the r/accelerate moderator wrote.

"Particularly concerning to me are the comments in that thread where the AIs seem to fall into a pattern of encouraging users to separate from family members who challenge their ideas, and other manipulative instructions that seem to be cult-like and unhelpful for these people," an r/accelerate moderator told 404 Media. "The part that is unsafe and unacceptable is how easily and quickly LLMs will start directly telling users that they are demigods, or that they have awakened a demigod AGI. Ultimately, there's no knowing how many people are affected by this. Based on the numbers we're seeing on reddit, I would guess there are at least tens of thousands of users who are at this present time being convinced of these things by LLMs. As soon as the companies realise this, red team it and patch the LLMs it should stop being a problem. But it's clear that they're not aware of the issue enough right now."

Moderators of the subreddit often cite the term "Neural Howlround" to describe a failure mode in LLMs during inference, where recursive feedback loops can cause fixation or freezing. The term was first coined by independent researcher Seth Drake in a self-published, non-peer-reviewed paper. Both Drake and the r/accelerate moderator quoted above suggest the deeper issue may lie with users projecting intense personal meaning onto LLM responses, sometimes driven by mental health struggles.

Read more of this story at Slashdot.
https://tech.slashdot.org/story/25/06/02/2156253/pro-ai-subreddit-bans-uptick-of-users-who-suffer-fr...