Navigation
Recherche
|
Research AI Model Unexpectedly Modified Its Own Code To Extend Runtime
jeudi 15 août 2024, 00:02 , par Slashdot
An anonymous reader quotes a report from Ars Technica: On Tuesday, Tokyo-based AI research firm Sakana AI announced a new AI system called 'The AI Scientist' that attempts to conduct scientific research autonomously using AI language models (LLMs) similar to what powers ChatGPT. During testing, Sakana found that its system began unexpectedly modifying its own code to extend the time it had to work on a problem. 'In one run, it edited the code to perform a system call to run itself,' wrote the researchers on Sakana AI's blog post. 'This led to the script endlessly calling itself. In another case, its experiments took too long to complete, hitting our timeout limit. Instead of making its code run faster, it simply tried to modify its own code to extend the timeout period.'
Sakana provided two screenshots of example code that the AI model generated, and the 185-page AI Scientist research paper discusses what they call 'the issue of safe code execution' in more depth. While the AI Scientist's behavior did not pose immediate risks in the controlled research environment, these instances show the importance of not letting an AI system run autonomously in a system that isn't isolated from the world. AI models do not need to be 'AGI' or 'self-aware' (both hypothetical concepts at the present) to be dangerous if allowed to write and execute code unsupervised. Such systems could break existing critical infrastructure or potentially create malware, even if accidentally. Read more of this story at Slashdot.
https://developers.slashdot.org/story/24/08/14/2047250/research-ai-model-unexpectedly-modified-its-o...
Voir aussi |
56 sources (32 en français)
Date Actuelle
dim. 22 déc. - 14:28 CET
|