
Sam Altman exits OpenAI commission for AI safety to create ‘independent’ oversight

Wednesday, 18 September 2024, 04:12, by ComputerWorld
OpenAI’s CEO Sam Altman has stepped away from his role as co-director of an internal commission the company created in May to oversee key safety and security decisions related to OpenAI’s artificial intelligence (AI) model development and deployment.

OpenAI’s Safety and Security Committee will become “an independent board oversight committee focused on safety and security” led by its new chair, Zico Kolter, director of the machine learning department of Carnegie Mellon University’s School of Computer Science, the company revealed in a blog post Monday. Kolter replaces the committee’s former chair, Bret Taylor, who also has departed.

Other members of the committee, which is chiefly aimed at overseeing the safety and security processes guiding OpenAI’s model development and deployment, remain: Adam D’Angelo, Quora co-founder and CEO; retired US Army General Paul Nakasone; and Nicole Seligman, former EVP and general counsel at Sony Corporation.

It was this committee, under Kolter’s leadership, that reviewed the safety and security criteria that OpenAI used to assess the “fitness” of OpenAI o1 for launch, as well as the results of safety evaluations for the model, according to the post. OpenAI o1 is the company’s latest family of large language models (LLMs) and introduces advanced reasoning that the company said exceeds that of human PhDs on a benchmark of physics, chemistry, and biology problems, and even ranks highly in math and coding.

More transparency, collaboration and monitoring on tap

OpenAI shared recommendations for the committee’s mission going forward: to establish independent governance for AI safety and security; enhance security measures; foster transparency about OpenAI’s work; collaborate with external organizations; and unify the company’s safety frameworks for model development and monitoring.

“We’re committed to continuously improving our approach to releasing highly capable and safe models, and value the crucial role the Safety and Security Committee will play in shaping OpenAI’s future,” said the post.

Indeed, AI safety, and OpenAI’s handling of it in particular, has become a major concern for industry stakeholders and lawmakers.

Altman became a controversial figure soon after co-founding OpenAI. His abrupt ousting from, and subsequent return to, the company late last year, along with the behind-the-scenes deal-making and shake-ups that followed, brought further notoriety to the CEO, who has become a public face of AI.

Highlights in that journey included OpenAI securing a $13 billion investment from Microsoft, which uses OpenAI technology for its generative AI tool, Copilot, and breaking ideologically from Tesla’s Elon Musk, a controversial figure in his own right, who was one of OpenAI’s founding board members and investors. Musk ultimately sued OpenAI and Altman, alleging that the company had breached its founding mission.

The safety of OpenAI’s technology also has been called into question under Altman, after reports surfaced alleging that the company used illegal non-disclosure agreements and required employees to disclose whether they had been in contact with authorities, purportedly to cover up security issues related to its AI development.

Effect of the move as yet unknown

It remains to be seen what, if any, impact Altman’s stepping back from OpenAI’s safety board will have on AI governance, which is still in its infancy, noted Abhishek Sengupta, practice director at Everest Group.

However, it appears to be a sign that the company recognizes “the importance of neutrality in AI governance efforts,” and could be willing to be more open about how it is managing AI security and safety risks, he told Computerworld.

“While the need to innovate fast has strained governance for AI, increasing government scrutiny and the risk of public blowback is gradually bringing it back into focus,” Sengupta said. “It is likely that we will increasingly see independent third parties involved in AI governance and audit.”
https://www.computerworld.com/article/3526546/sam-altman-exits-openai-commission-for-ai-safety-to-cr...
