OpenAI whistleblowers seek SEC probe into ‘restrictive’ NDAs with staffers
Monday, 15 July 2024, 13:47, by ComputerWorld
Some employees of ChatGPT-maker OpenAI have reportedly written to the US Securities and Exchange Commission (SEC) seeking a probe into certain employee agreements, which they describe as restrictive non-disclosure agreements (NDAs).
These staffers-turned-whistleblowers allege that the company forced its employees to sign agreements that did not comply with SEC regulations. “Given the well-documented potential risks posed by the irresponsible deployment of AI, we urge the commissioners to immediately approve an investigation into OpenAI’s prior NDAs, and to review current efforts apparently being undertaken by the company to ensure full compliance with SEC rules,” read the letter shared with Reuters by the office of Senator Chuck Grassley.

The letter alleges that OpenAI made employees sign agreements that curbed their federal rights to whistleblower compensation, and it urges the financial watchdog to impose individual penalties for each such agreement signed. The whistleblowers further allege that OpenAI’s agreements barred employees from making any disclosure to authorities without first checking with management, and that failure to comply would attract penalties. The company, according to the letter, also did not create any separate or specific exemptions in its employee non-disparagement clauses for disclosing securities violations to the SEC. An email sent to OpenAI about the letter went unanswered.

The Senator’s office also cast doubt on practices at OpenAI. “OpenAI’s policies and practices appear to cast a chilling effect on whistleblowers’ right to speak up and receive due compensation for their protected disclosures,” the Senator was quoted as saying.

Experts in the field of AI have been warning against use of the technology without proper guidelines and regulations. In May, more than 150 leading artificial intelligence (AI) researchers, ethicists, and others signed an open letter calling on generative AI (genAI) companies to submit to independent evaluations of their systems to maintain basic protections against the risks of large-scale AI.
Last April, the who’s who of the technology industry called for AI labs to stop training the most powerful systems for at least six months, citing “profound risks to society and humanity.” That open letter, which now has more than 3,100 signatories including Apple co-founder Steve Wozniak, called out San Francisco-based OpenAI’s recently announced GPT-4 algorithm in particular, saying the company should halt further development until oversight standards were in place. OpenAI, for its part, formed a safety and security committee in May, led by board members, as it began training its next large language model.

More OpenAI news:
OpenAI is working on new reasoning AI technology
OpenAI reportedly stopped staffers from warning about security risks
OpenAI: Musk’s control issues at heart of ongoing rift
OpenAI models still available in China via Azure cloud despite company ban
https://www.computerworld.com/article/2517435/openai-whistleblowers-seek-sec-probe-into-restrictive-...