Weaponizing generative AI

Monday, December 16, 2024, 10:00, by InfoWorld
Well, that didn’t last long. Generative AI has existed just a few short years, but already we seem to be alternating between bouts of disillusionment and euphoria. More worrying, however, is not where we are on Gartner’s Hype Cycle, but how genAI has already been weaponized, sometimes by accident and sometimes quite intentionally. It’s normal for new technologies to overlook security as they rise to prominence, but for genAI, security shortcomings may erode the trust the technology needs for widespread production use.

Security through obscurity

Early in a technology’s existence, other concerns like performance or convenience may trump security. For years we in the open source world were far too cavalier about security, trusting in smart-sounding phrases such as “given enough eyeballs, all bugs are shallow” when, in fact, few “eyeballs” actively look at source code. Even though it’s true that open source processes tend toward security even if open source code doesn’t, we treated security as a birthright when, in reality, much open source software was secure simply because no one had bothered to exploit it yet.

That comfortable myth was shattered by Heartbleed in 2014. Since then, there’s been a steady drumbeat of supply chain attacks against Linux and other prominent open source software, making open source security, not licensing, the must-solve issue for developers. In fact, by one recent estimate, open source malware is up 200% since 2023 and will continue to rise as developers embed open source packages into their projects. As the report authors note, “Open source malware thrives in ecosystems with low entry barriers, no author verification, high usage, and diverse users.”

Worsening that situation is the reality that developers are increasingly saving time by using AI to author bug reports. Such “low-quality, spammy, and LLM [large language model]-hallucinated security reports,” as Python’s Seth Larson calls them, overload project maintainers with time-wasting garbage, making it harder to maintain the security of the project. AI is also responsible for introducing bugs into software, as Symbiotic Security CEO Jerome Robert details. “GenAI platforms, such as [GitHub] Copilot, learn from code posted to sites like GitHub and have the potential to pick up some bad habits along the way” because “security is a secondary objective (if at all).” GenAI, in other words, is highly impressionable and will regurgitate the same bugs (or racist commentary) that it picks up from its source material.
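To make that concrete, here is a minimal, hypothetical sketch (not from the article) of the kind of insecure pattern a code assistant trained on public repositories can reproduce, alongside the safer parameterized form. The function names and schema are illustrative only.

import sqlite3

# Illustrative only: a classic injection bug of the sort an assistant can
# pick up from public training code, next to the safer alternative.

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Vulnerable: interpolating user input into the SQL string lets a
    # crafted username inject arbitrary SQL.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safer(conn: sqlite3.Connection, username: str):
    # Safer: a parameterized query keeps user input out of the SQL text.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

Both versions return the same rows for ordinary input; only the second stays safe when the input is hostile, which is exactly the distinction an assistant optimizing for “code that works” can miss.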

What, me worry?

None of this matters so long as we’re just using generative AI to wow people on X with yet another demo of “I can’t believe AI can create a video I’d never pay to watch.” But as genAI is increasingly used to build all the software we use… well, security matters. A lot.

Unfortunately, it doesn’t yet matter to OpenAI and the other companies building large language models. According to the newly released AI Safety Index, which grades Meta, OpenAI, Anthropic, and others on risk and safety, industry LLMs are, as a group, on track to flunk out of their freshman year in AI college. The best-performing company, Anthropic, earned a C. As Stuart Russell, one of the report’s authors and a UC Berkeley professor, opines, “Although there is a lot of activity at AI companies that goes under the heading of ‘safety,’ it is not yet very effective.” Further, he says, “None of the current activity provides any kind of quantitative guarantee of safety; nor does it seem possible to provide such guarantees given the current approach to AI via giant black boxes trained on unimaginably vast quantities of data.” Not overly encouraging, right?

Meanwhile, genAI is still searching for customers, and one area where it’s seeing widespread adoption is software development. Developers increasingly default to tools like GitHub Copilot for code completion, but what if such tools have been poisoned with malicious code? This is a rising threat and one resistant to detection. It’s only going to get worse as developers come to depend on these tools.
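The article doesn’t prescribe countermeasures, but one modest, illustrative guard (no substitute for real supply chain tooling) is to confirm that a package a coding assistant suggests actually exists before installing it, since hallucinated or typosquatted names are one route by which poisoned code reaches a project. The sketch below uses PyPI’s public JSON endpoint; the helper name is hypothetical.

import json
import urllib.request
from urllib.error import HTTPError

def pypi_metadata(package_name: str):
    # Query PyPI's JSON endpoint; returns metadata, or None if the
    # package does not exist (a hint it may be hallucinated).
    url = f"https://pypi.org/pypi/{package_name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)
    except HTTPError as err:
        if err.code == 404:
            return None
        raise

if __name__ == "__main__":
    info = pypi_metadata("requests")  # swap in the suggested package name
    if info is None:
        print("Package not found on PyPI; do not install it blindly.")
    else:
        print(info["info"]["name"], info["info"]["version"], info["info"]["summary"])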

And yet, there’s also cause for hope. As noted above about open source, LLM security will likely improve as enterprises demand heightened security. Today, the pressure to improve the accuracy and utility of LLMs crowds out security as a first-order concern. We’re already seeing unease over genAI security hamper adoption. We need enterprises to demand that genAI vendors deliver stronger security rather than coasting by on hype.
https://www.infoworld.com/article/3624650/weaponizing-genai.html
