Researchers Surprised That With AI, Toxicity is Harder To Fake Than Intelligence
Wednesday, November 12, 2025, 15:02, by Slashdot
Researchers from four universities have released a study revealing that AI models remain easily detectable in social media conversations despite optimization attempts. The team tested nine language models across Twitter/X, Bluesky, and Reddit, developing classifiers that identified AI-generated replies with 70 to 80% accuracy. An overly polite emotional tone was the most persistent giveaway: the models consistently produced lower toxicity scores than authentic human posts across all three platforms.
Instruction-tuned models performed worse than their base counterparts at mimicking humans, and the 70-billion-parameter Llama 3.1 showed no advantage over smaller 8-billion-parameter versions. The researchers found a fundamental tension: models optimized to avoid detection strayed further from actual human responses semantically. Read more of this story at Slashdot.
https://tech.slashdot.org/story/25/11/12/142219/researchers-surprised-that-with-ai-toxicity-is-harde...
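The detection approach described amounts to a supervised classifier trained on stylistic signals such as toxicity. As a rough illustration only (the paper's actual features, models, and data are not given here), the sketch below trains a logistic regression on hypothetical feature vectors consisting of a toxicity score, reply length, and exclamation count to separate human from AI-written replies; every number and feature name is invented for illustration.

```python
# Toy sketch: binary classifier separating human replies (label 1) from
# AI-generated replies (label 0) using simple stylistic features.
# The feature set and all values below are hypothetical stand-ins for
# whatever the study actually used.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Each row: (toxicity_score, reply_length, exclamation_count)
X = np.array([
    [0.62, 48, 2],   # blunt human reply
    [0.05, 120, 0],  # polite AI-style reply
    [0.40, 35, 1],
    [0.03, 150, 0],
    [0.55, 60, 3],
    [0.08, 110, 0],
    [0.70, 42, 2],
    [0.02, 140, 0],
])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

clf = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# With this toy data the toxicity coefficient comes out positive: higher
# toxicity pushes predictions toward "human", echoing the study's headline
# finding that low toxicity is the tell for AI text.
print("toxicity coefficient:", clf.coef_[0][0])
```

The reported 70 to 80% accuracy in the study presumably came from richer features and real platform data; the point of the sketch is only the shape of the pipeline, not the numbers.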








