Can AI Developers Be Held Liable for Negligence?
Sunday, September 29, 2024, 05:34, by Slashdot
Bryan Choi, an associate professor of law and computer science focusing on software safety, proposes shifting AI liability onto the builders of the systems:
To date, most popular approaches to AI safety and accountability have focused on the technological characteristics and risks of AI systems, while averting attention from the workers behind the curtain responsible for designing, implementing, testing, and maintaining such systems... I have previously argued that a negligence-based approach is needed because it directs legal scrutiny on the actual persons responsible for creating and managing AI systems. A step in that direction is found in California's AI safety bill, which specifies that AI developers shall articulate and implement protocols that embody the 'developer's duty to take reasonable care to avoid producing a covered model or covered model derivative that poses an unreasonable risk of causing or materially enabling a critical harm' (emphasis added). Although tech leaders have opposed California's bill, courts don't need to wait for legislation to allow negligence claims against AI developers. But how would negligence work in the AI context, and what downstream effects should AI developers anticipate? The article suggest two possibilities. Classifying AI developers as ordinary employees leaves employers then sharing liability for negligent acts (giving them 'strong incentives to obtain liability insurance policies and to defend their employees against legal claims.') But AI developers could also be treated as practicing professionals (like physicians and attorneys). '{In this regime, each AI professional would likely need to obtain their own individual or group malpractice insurance policies.' AI is a field that perhaps uniquely seeks to obscure its human elements in order to magnify its technical wizardry. The virtue of the negligence-based approach is that it centers legal scrutiny back on the conduct of the people who build and hype the technology. To be sure, negligence is limited in key ways and should not be viewed as a complete answer to AI governance. But fault should be the default and the starting point from which all conversations about AI accountability and AI safety begin. Thanks to long-time Slashdot reader david.emery for sharing the article. Read more of this story at Slashdot.
https://yro.slashdot.org/story/24/09/29/0122212/can-ai-developers-be-held-liable-for-negligence?utm_...