
Meta promises it won’t release dangerous AI systems

Tuesday, February 4, 2025, 19:26, by ComputerWorld
According to a new Meta policy document, the Frontier AI Framework, the company may decline to release internally developed AI systems in certain high-risk scenarios.

The document defines two risk classifications for AI systems: “high risk” and “critical risk.” In both cases, these are systems that could help carry out cyber, chemical, or biological attacks.

Systems classified as “high risk” might facilitate such an attack, though not to the same extent as a “critical risk” system, whose use could result in catastrophic outcomes, such as the full takeover of a corporate environment or the deployment of powerful biological weapons.

In the document, Meta states that if a system is classified as “high risk,” the company will restrict internal access to it and will not release it until mitigations bring the risk down to “moderate levels.” If the system is classified as “critical risk,” security protections will be put in place to prevent it from spreading, and development will stop until the system can be made safer.
https://www.computerworld.com/article/3816687/meta-promises-not-to-release-dangerous-ai-systems.html
