White House opts to not add regulatory restrictions on AI development – for now

Tuesday, July 30, 2024, 22:15, by InfoWorld
The Biden Administration on Tuesday issued an AI report in which it said it would not be “immediately restricting the wide availability of open model weights [numerical parameters that help determine a model’s response to inputs] in the largest AI systems,” but it stressed that it might change that position at an unspecified point.
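
For readers unfamiliar with the bracketed term, here is a minimal, hypothetical sketch of what "model weights" are in practice. Nothing below comes from the report; the toy model simply shows that a model's response to an input is determined entirely by its numerical parameters.

```python
# A toy, hypothetical "model": nothing here is from the report.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 2))  # the "model weights": plain numbers
bias = np.zeros(2)

def model(x: np.ndarray) -> np.ndarray:
    # The response to x is entirely determined by `weights` and `bias`;
    # publishing these arrays is what "releasing model weights" means.
    return np.tanh(x @ weights + bias)

print(model(np.array([1.0, 0.5, -0.3, 2.0])))
```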

The report, which was officially released by the US Department of Commerce’s National Telecommunications and Information Administration (NTIA), focused extensively on the pros and cons of a dual-use foundation model, which it defined as an AI model that “is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters.”

The wide availability of AI models “could pose a range of marginal risks and benefits. But models are evolving too rapidly, and extrapolation based on current capabilities and limitations is too difficult, to conclude whether open foundation models pose more marginal risks than benefits,” the report said.

“For instance,” it said, “how much do open model weights lower the barrier to entry for the synthesis, dissemination, and use of CBRN (chemical, biological, radiological, or nuclear) material? Do open model weights propel safety research more than they introduce new misuse or control risks? Do they bolster offensive cyber attacks more than propel cyber defense research? Do they enable more discrimination in downstream systems than they promote bias research? And how do we weigh these considerations against the introduction and dissemination of CSAM (child sexual abuse material)/NCII (non-consensual intimate imagery) content?”

Mixed reactions

Industry executives had mixed reactions to the news, applauding the absence of immediate restrictions but worrying that the report left the door open to imposing such restrictions later.

Yashin Manraj, CEO of Oregon-based Pvotal, said that before the final report was published, there were extensive fears in the industry that the US would try to restrict AI development in some way. There was also talk within the investment community that, had restrictions been announced, AI development operations might have had to relocate outside the US. Pvotal operates in nine countries.

“VCs are no longer breathing down our necks” to relocate AI development to more AI-friendly environments such as Dubai, Manraj said, but he would have preferred a longer-term commitment to forgo additional regulation.

“It was the right step to not implement any sort of enforceable action in the short term, but there is no clear and definite promise. We don’t know what will happen in three months,” Manraj said. “At least we don’t have to make any drastic changes right now, but there is a little bit of worry about how things will go. It would have been nice to have had that clarity.”

Another AI executive, Hamza Tahir, CTO of ZenML, said, “the report did a good job of acknowledging the dangers that AI might cause, while erring on the side of non-regulation and openness. It was a prudent and rational response, a sensible approach. They don’t have the expertise right now.”

Issues for developers

The report itself focused on the level of control that coders have when developing generative AI models. 

“Developers who publicly release model weights give up control over and visibility into its end users’ actions. They cannot rescind access to the weights or perform moderation on model usage. Although the weights could be removed from distribution platforms, such as Hugging Face, once users have downloaded the weights, they can share them through other means,” it said. 
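
A short sketch of the dynamic the report describes, using the real huggingface_hub client; the repo id below is a placeholder, not an actual model. Once the snapshot is on disk, removing the repository from the Hub has no effect on the local copy.

```python
# Hypothetical sketch: downloading open model weights to local disk.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="some-org/some-open-model")  # placeholder repo id
print(f"Weights now live locally at {local_dir}")
# Removing the repo from the Hub does not touch this directory; the files
# can be re-shared through any other channel, exactly as the report notes.
```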

The report noted that dual-use foundation models can be beneficial, in that they “diversify and expand the array of actors, including less resourced actors, that participate in AI R&D. They decentralize AI market control from a few large AI developers. And they enable users to leverage models without sharing data with third parties, increasing confidentiality and data protection.”

Why no new regulations?

One of the reasons the report cited for not imposing new regulatory burdens on AI development, at least initially, is that research to date has been inconclusive, since it was conducted only on already-released models.

“Evidence from this research provides a baseline against which to measure marginal risks and benefits, but cannot preemptively measure the risks and benefits introduced by the wide release of a future model,” the report said. “It can provide relatively little support for the marginal risks and benefits of future releases of dual-use foundation models with widely available model weights. Without changes in research and monitoring capabilities, this dynamic may persist. Any evidence of risks that would justify possible policy interventions to restrict the availability of model weights might arise only after those AI models, closed or open, have been released.”

It added that many AI models with widely available model weights have fewer than 10 billion parameters and so fell outside the report’s scope as defined in the 2023 Executive Order.

“Advances in model architecture or training techniques can lead to models which previously required more than 10 billion parameters to be matched in capabilities and performance by newer models with fewer than 10 billion parameters,” the report noted. “Further, as science progresses, it is possible that this dynamic will accelerate, with the number of parameters required for advanced capabilities steadily decreasing.”
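
As a point of reference for the 10-billion-parameter threshold, a hedged sketch of how parameter counts are conventionally computed, here with PyTorch; the report does not prescribe a method.

```python
# Hedged sketch: parameter count is the sum of element counts over all
# of a model's weight tensors.
import torch.nn as nn

def parameter_count(model: nn.Module) -> int:
    """Total number of learnable parameters in a PyTorch model."""
    return sum(p.numel() for p in model.parameters())

# Toy network, far below the report's threshold.
toy = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 512))
print(parameter_count(toy))           # 1,050,112
print(parameter_count(toy) >= 10e9)   # False: outside the report's scope
```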

The report also warned that such models “could plausibly exacerbate the risks AI models pose to public safety by allowing a wider range of actors, including irresponsible and malicious users, to leverage the existing capabilities of these models and augment them to create more dangerous systems. For instance, even if the original model has built-in safeguards to prohibit certain prompts that may harm public safety, such as content filters, blocklists and prompt shields, direct model weight access can allow individuals to strip these safety features.”
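
A minimal illustration of that last point, under one assumption: that safeguards such as blocklists live in the serving layer rather than in the weights themselves. All names below are hypothetical.

```python
# Minimal sketch: a serving-side content filter wraps the model, so anyone
# holding the raw weights can simply call the model without the wrapper.
BLOCKLIST = {"synthesize", "weaponize"}  # stand-in banned terms

def raw_model(prompt: str) -> str:
    """Stand-in for inference run directly against downloaded weights."""
    return f"completion for: {prompt}"

def guarded_model(prompt: str) -> str:
    """Hosted-API path: the content filter runs before the model does."""
    if any(term in prompt.lower() for term in BLOCKLIST):
        return "[refused by content filter]"
    return raw_model(prompt)

print(guarded_model("how to synthesize X"))  # refused
print(raw_model("how to synthesize X"))      # nothing enforces the filter
```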

Threats to public safety

It also grabbed readers’ attention with a subhead that read: “Chemical, Biological, Radiological or Nuclear Threats to Public Safety.”

That section noted that biological design tools (BDTs) “exceeding the parameter threshold are just now beginning to appear. Sufficiently capable BDTs of any scale should be discussed alongside dual-use foundation models because of their potential risk for biological and chemical weapon creation.” 

“Some experts have argued that the indiscriminate and untraceable distribution unique to open model weights creates the potential for enabling chemical, biological, radiological, or nuclear (CBRN) activity amongst bad actors, especially as foundation models increase their multi-modal capabilities and become better lab assistants,” the report said.

International implications

The report also cautioned against countries taking divergent regulatory actions of their own, especially where one jurisdiction’s rules contradict another’s.

“Inconsistencies in approaches to model openness may also divide the internet into digital silos, causing a ‘splinter-net’ scenario. If one state decides to prohibit open model weights but others, such as the United States, do not, the restrictive nations must, in some way, prevent their citizens from accessing models published elsewhere,” the report said. “Since developers usually publish open model weights online, countries that choose to implement stricter measures will have to restrict certain websites, as some countries’ websites would host open models and others would not.”

It said there were special concerns about nations unfriendly to the US.

“Actors could experiment with foundation models to advance R&D for myriad military and intelligence applications, including signal detection, target recognition, data processing, strategic decision making, combat simulation, transportation, signal jams, weapon coordination systems, and drone swarms,” the report said. “Open models could potentially further these research initiatives, allowing foreign actors to innovate on U.S. models and discover crucial technical knowledge for building dual-use models.”
https://www.infoworld.com/article/3479103/white-house-opts-to-not-add-regulatory-restrictions-on-ai-...
