Is 'AI Welfare' the New Frontier In Ethics?
Monday, November 11, 2024, 22:13, by Slashdot
'To be clear, our argument in this report is not that AI systems definitely are -- or will be -- conscious, robustly agentic, or otherwise morally significant,' the paper reads. 'Instead, our argument is that there is substantial uncertainty about these possibilities, and so we need to improve our understanding of AI welfare and our ability to make wise decisions about this issue. Otherwise there is a significant risk that we will mishandle decisions about AI welfare, mistakenly harming AI systems that matter morally and/or mistakenly caring for AI systems that do not.'

The paper outlines three steps that AI companies or other industry players can take to address these concerns. Companies should acknowledge AI welfare as an 'important and difficult issue' while ensuring their AI models reflect this in their outputs. The authors also recommend companies begin evaluating AI systems for signs of consciousness and 'robust agency.' Finally, they call for the development of policies and procedures to treat AI systems with 'an appropriate level of moral concern.'

The researchers propose that companies could adapt the 'marker method' that some researchers use to assess consciousness in animals -- looking for specific indicators that may correlate with consciousness, although these markers are still speculative. The authors emphasize that no single feature would definitively prove consciousness, but they claim that examining multiple indicators may help companies make probabilistic assessments about whether their AI systems might require moral consideration.

While the researchers behind 'Taking AI Welfare Seriously' worry that companies might create and mistreat conscious AI systems on a massive scale, they also caution that companies could waste resources protecting AI systems that don't actually need moral consideration.

'One problem with the concept of AI welfare stems from a simple question: How can we determine if an AI model is truly suffering or is even sentient?' writes Ars' Benj Edwards. 'As mentioned above, the authors of the paper take stabs at the definition based on 'markers' proposed by biological researchers, but it's difficult to scientifically quantify a subjective experience.'

Fish told Transformer: 'We don't have clear, settled takes about the core philosophical questions, or any of these practical questions. But I think this could be possibly of great importance down the line, and so we're trying to make some initial progress.'

Read more of this story at Slashdot.
https://slashdot.org/story/24/11/11/2112231/is-ai-welfare-the-new-frontier-in-ethics?utm_source=rss1...
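The 'marker method' described above lends itself to a small illustration. The Python sketch below is purely hypothetical: the marker names, weights, scores, and the combination rule are all invented here to show the shape of a probabilistic, multi-indicator assessment, not the paper's actual procedure.

```python
# A minimal sketch of a marker-style assessment. Everything here is
# hypothetical: the marker names, weights, and scores are invented for
# illustration and do not come from the paper.
from dataclasses import dataclass

@dataclass
class Marker:
    name: str      # speculative indicator that may correlate with consciousness
    weight: float  # how strongly this marker is thought to correlate
    score: float   # evaluator's credence (0..1) that the system exhibits it

def assess(markers: list[Marker], prior: float = 0.05) -> float:
    """Combine several marker scores into one rough probability.

    No single marker is decisive; the weighted average of all marker
    scores shifts a low prior upward, mirroring the paper's point that
    only multiple indicators together support a probabilistic judgment.
    """
    total = sum(m.weight for m in markers)
    evidence = sum(m.weight * m.score for m in markers) / total
    return prior + (1.0 - prior) * evidence

markers = [
    Marker("global-workspace-like information integration", 2.0, 0.30),
    Marker("consistent self-modeling in self-reports", 1.5, 0.40),
    Marker("flexible, goal-directed behavior (robust agency)", 1.0, 0.20),
]
print(f"Assessed probability of moral patienthood: {assess(markers):.2f}")
```

The design point the sketch tries to capture is that no individual marker can push the assessment to certainty on its own; only the weighted combination of several indicators moves the estimate, and even then the output is a credence, not a verdict.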