OpenAI's AI Reasoning Model 'Thinks' In Chinese Sometimes, No One Really Knows Why
Wednesday, January 15, 2025, 01:45, by Slashdot
'[Labs like] OpenAI and Anthropic utilize [third-party] data labeling services for PhD-level reasoning data for science, math, and coding,' Xiao wrote in a post on X. '[F]or expert labor availability and cost reasons, many of these data providers are based in China.'

Other experts don't buy the o1 Chinese data labeling hypothesis, however. They point out that o1 is just as likely to switch to Hindi, Thai, or a language other than Chinese while teasing out a solution. Rather, these experts say, o1 and other reasoning models might simply be using languages they find most efficient to achieve an objective (or hallucinating). 'The model doesn't know what language is, or that languages are different,' Matthew Guzdial, an AI researcher and assistant professor at the University of Alberta, told TechCrunch. 'It's all just text to it.'

Tiezhen Wang, a software engineer at AI startup Hugging Face, agrees with Guzdial that reasoning models' language inconsistencies may be explained by associations the models made during training. 'By embracing every linguistic nuance, we expand the model's worldview and allow it to learn from the full spectrum of human knowledge,' Wang wrote in a post on X. 'For example, I prefer doing math in Chinese because each digit is just one syllable, which makes calculations crisp and efficient. But when it comes to topics like unconscious bias, I automatically switch to English, mainly because that's where I first learned and absorbed those ideas.'

Luca Soldaini, a research scientist at the nonprofit Allen Institute for AI, cautioned that we can't know for certain. 'This type of observation on a deployed AI system is impossible to back up due to how opaque these models are,' they told TechCrunch. 'It's one of the many cases for why transparency in how AI systems are built is fundamental.'

Read more of this story at Slashdot.
https://slashdot.org/story/25/01/14/239246/openais-ai-reasoning-model-thinks-in-chinese-sometimes-no...