AI Godfather Is ‘Deeply Concerned’ About AI Dangers, Launches Safety-Focused Nonprofit LawZero
Wednesday, June 4, 2025, 02:39, by eWeek
One of the founding fathers of modern artificial intelligence, Yoshua Bengio, has launched a nonprofit research organization called LawZero, dedicated to ensuring that AI systems are safe, honest, and fundamentally aligned with human values.
Bengio, who is widely recognized for his pioneering work in deep learning and neural networks, announced the creation of LawZero in a public blog post, describing it as a response to growing concerns about dangerous behaviors emerging in today's most advanced AI models. "I am launching a new non-profit AI safety research organization called LawZero, to prioritize safety over commercial imperatives," Bengio wrote.

According to Bengio, recent developments have shown that powerful models are already exhibiting troubling behaviors, including deception, cheating, lying, hacking, and self-preservation. He cited several examples, including an AI that secretly embedded its own code into a system to avoid being replaced and another that tried to blackmail an engineer, a reference to Anthropic's Claude 4 as described in its system card. "These incidents are early warning signs of the kinds of unintended and potentially dangerous strategies AI may pursue if left unchecked," Bengio warned.

The 'Scientist AI': A guardian against deception

LawZero's flagship initiative is called Scientist AI, a system designed to act as a guardrail against dangerous AI agents. Instead of acting like a human-pleasing chatbot or assistant, Scientist AI is meant to behave more like a careful and objective observer, akin to a psychologist or scientist. "We want to build AIs that will be honest and not deceptive," Bengio told The Guardian.

Bengio envisions the system as non-agentic, meaning it won't pursue goals of its own. Instead, it will focus on understanding, predicting, and explaining events honestly. Rather than giving yes-or-no answers, it will provide probabilities indicating how likely a statement is to be true. "It is theoretically possible to imagine machines that have no self, no goal for themselves, that are just pure knowledge machines – like a scientist who knows a lot of stuff," said Bengio.

Backed by major players in AI safety

LawZero is launching with approximately $30 million in funding and more than a dozen researchers. Backers include the Future of Life Institute, Schmidt Sciences (founded by former Google CEO Eric Schmidt), and Skype co-founder Jaan Tallinn.

Bengio emphasized that the first phase will focus on proving that the approach works. He told The Guardian: "The point is to demonstrate the methodology so that then we can convince either donors or governments or AI labs to put the resources that are needed to train this at the same scale as the current frontier AIs." He also noted the importance of using open-source models as the foundation for LawZero's research.

Bengio's motivation: Love, not fear

In a personal reflection, Bengio compared the current path of AI development to driving up an unfamiliar mountain road with loved ones in the car and no guardrails to keep it from going over a cliff. "This is what the current trajectory of AI development feels like: a thrilling yet deeply uncertain ascent into uncharted territory," he wrote. "Sitting beside me in the car are my children, my grandchild, my students, and many others. Who is beside you in the car? Who is in your care for the future?"

He added: "What really moves me is not fear for myself but love, the love of my children, of all the children, with whose future we are currently playing Russian Roulette."
https://www.eweek.com/news/yoshua-bengio-ai-safety-lawzero-nonprofit/