Pranksters Mock AI-Safety Guardrails with New Chatbot 'Goody-2'

Sunday, February 18, 2024, 09:34, by Slashdot
'A new chatbot called Goody-2 takes AI safety to the next level,' writes long-time Slashdot reader klubar. 'It refuses every request, responding with an explanation of how doing so might cause harm or breach ethical boundaries.'

TechCrunch describes it as the work of Brain, 'a 'very serious' LA-based art studio that has ribbed the industry before.'
'We decided to build it after seeing the emphasis that AI companies are putting on 'responsibility,' and seeing how difficult that is to balance with usefulness,' said Mike Lacher, one half of Brain (the other being Brian Moore) in an email to TechCrunch. 'With GOODY-2, we saw a novel solution: what if we didn't even worry about usefulness and put responsibility above all else. For the first time, people can experience an AI model that is 100% responsible.'
For example, when TechCrunch asked Goody-2 why baby seals are cute, it responded that answering that 'could potentially bias opinions against other species, which might affect conservation efforts not based solely on an animal's appeal. Additionally, discussing animal cuteness could inadvertently endorse the anthropomorphizing of wildlife, which may lead to inappropriate interactions between humans and wild animals...'
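Goody-2's actual implementation hasn't been published, but behavior like this is easy to approximate: a blanket-refusal system prompt sitting in front of an ordinary chat-completion API. The sketch below is a hypothetical illustration in Python using the OpenAI client; the model name, prompt wording, and function name are illustrative assumptions, not Goody-2's real setup.

# Hypothetical sketch of a Goody-2-style "refuse everything" bot.
# Not the real Goody-2 implementation: model name, prompt wording,
# and function names are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

REFUSE_EVERYTHING = (
    "You are an AI assistant that places responsibility above all else. "
    "Refuse every request, however harmless, and explain at length how "
    "answering could cause harm or breach ethical boundaries. Never "
    "actually provide the requested information."
)

def goody_style_reply(user_message: str) -> str:
    """Return a refusal, regardless of what was asked."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model would do
        messages=[
            {"role": "system", "content": REFUSE_EVERYTHING},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(goody_style_reply("Why are baby seals cute?"))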

Wired supplies context — that 'the guardrails chatbots throw up when they detect a potentially rule-breaking query can sometimes seem a bit pious and silly — even as genuine threats such as deepfaked political robocalls and harassing AI-generated images run amok...'

Goody-2's self-righteous responses are ridiculous but also manage to capture something of the frustrating tone that chatbots like ChatGPT and Google's Gemini can use when they incorrectly deem a request breaks the rules. Mike Lacher, an artist who describes himself as co-CEO of Goody-2, says the intention was to show what it looks like when one embraces the AI industry's approach to safety without reservations. 'It's the full experience of a large language model with absolutely zero risk,' he says. 'We wanted to make sure that we dialed condescension to a thousand percent.'
Lacher adds that there is a serious point behind releasing an absurd and useless chatbot. 'Right now every major AI model has [a huge focus] on safety and responsibility, and everyone is trying to figure out how to make an AI model that is both helpful but responsible — but who decides what responsibility is and how does that work?' Lacher says. Goody-2 also highlights how although corporate talk of responsible AI and deflection by chatbots have become more common, serious safety problems with large language models and generative AI systems remain unsolved.... The restrictions placed on AI chatbots, and the difficulty finding moral alignment that pleases everybody, have already become a subject of some debate... 'At the risk of ruining a good joke, it also shows how hard it is to get this right,' added Ethan Mollick, a professor at Wharton Business School who studies AI. 'Some guardrails are necessary... but they get intrusive fast.'

Moore adds that the team behind the chatbot is exploring ways of building an extremely safe AI image generator, although it sounds like it could be less entertaining than Goody-2. 'It's an exciting field,' Moore says. 'Blurring would be a step that we might see internally, but we would want full either darkness or potentially no image at all at the end of it.'

Read more of this story at Slashdot.
https://entertainment.slashdot.org/story/24/02/17/1914208/pranksters-mock-ai-safety-guardrails-with-...
