The rising threat of shadow AI

Friday, February 28, 2025, 10:00, by InfoWorld
Employees in a large financial organization began developing AI tools to automate time-consuming tasks such as weekly report generation. They didn’t think about what could go wrong. Within a few months, the number of unauthorized applications skyrocketed from just a couple to 65. The kicker is that all these AI tools are training on sensitive corporate data, including personally identifiable information.

One team used a shadow AI solution built on ChatGPT to streamline complex data visualizations. This inadvertently exposed the company’s intellectual property to public models. Of course, compliance officers raised alarms about potential data breaches and regulatory violations. (How come these guys don’t prevent this stuff but show up after it’s happened?)

The company’s leadership realized the critical need for centralized AI governance. They conducted a comprehensive audit and established an Office of Responsible AI aimed at mitigating risks while allowing employees to leverage sanctioned AI tools. Perhaps too little, too late?

Stay out of the shadows

Cloud security administrators are increasingly grappling with the rise of shadow AI. Employees, driven by demanding workloads and tight deadlines, adopt AI applications without IT approval or oversight. The security implications are profound and challenging: Shadow AI represents a fundamental challenge to our carefully constructed security perimeters. Since the emergence of generative AI and ChatGPT, enterprises have developed somewhat restrictive policies around AI use, and, as you may have guessed, those restrictions push employees toward unsanctioned tools, resulting in a chaotic jumble of applications that can lead to significant security risks.

According to recent findings, more than 12,000 such apps have already been identified, and 50 new applications pop up daily. Disturbingly, many of these tools bypass established security protocols. I suspect that security admins are unaware of most of them and, in many instances, never will be. The security audits I attend often conclude that about 75% of threats are missed. Given that these shadow AI applications often run on or around cloud systems, the problem compounds. Cloud deployments are far more complex, and the exposure extends to the cloud provider or providers as well.

What to do?

These unauthorized applications open up critical risks that even the most educated security admins don’t yet fully understand. First and foremost is the undeniable threat of data breaches. When employees input sensitive company information into unvetted AI applications, they inadvertently expose that data to potential leaks. The AI applications themselves are not always the bad actors; the data is often transmitted to remote servers, in many instances outside the country.

Another risk is that many shadow AI tools, such as those built on OpenAI’s ChatGPT or Google’s Gemini, default to training on any data provided. This means proprietary or sensitive data could already be mingled with public models. Moreover, shadow AI apps can lead to compliance violations. It’s crucial for organizations to maintain stringent control over where and how their data is used. Regulatory frameworks not only impose strict requirements but also serve to protect sensitive data that could harm an organization’s reputation if mishandled.
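
To make this concrete, here is a minimal sketch of the kind of pre-submission filter a security team might place in front of external AI services. It assumes a simple regex-based approach; the patterns, function name, and sample prompt are illustrative assumptions, and a production deployment would rely on a proper DLP engine rather than a handful of regular expressions.

    import re

    # Illustrative PII patterns only; a real DLP engine is far more thorough.
    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def redact(text: str) -> str:
        """Replace anything matching a PII pattern with a labeled placeholder."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[REDACTED-{label}]", text)
        return text

    prompt = "Summarize the account history for jane.doe@example.com, SSN 123-45-6789."
    print(redact(prompt))
    # Summarize the account history for [REDACTED-EMAIL], SSN [REDACTED-US_SSN].

The same idea generalizes: route outbound AI traffic through a gateway that applies filters like this before anything reaches a third-party model, so sensitive data never becomes training material in the first place.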

Cloud computing security admins are aware of these risks. However, the tools available to combat shadow AI are grossly inadequate. Traditional security frameworks are ill-equipped to deal with the rapid and spontaneous nature of unauthorized AI application deployment. The AI applications themselves evolve constantly, which shifts the threat vectors, which means the tools can’t get a fix on the full variety of threats.

Getting your workforce on board

Creating an Office of Responsible AI can play a vital role in a governance model. This office should include representatives from IT, security, legal, compliance, and human resources to ensure that all facets of the organization have input in decision-making regarding AI tools. This collaborative approach can help mitigate the risks associated with shadow AI applications. You want to ensure that employees have secure and sanctioned tools. Don’t forbid AI—teach people how to use it safely. Indeed, the “ban all tools” approach never works; it lowers morale, causes turnover, and may even create legal or HR issues.

The call to action is clear: Cloud security administrators must proactively address the shadow AI challenge. This involves auditing current AI usage within the organization and continuously monitoring network traffic and data flows for signs of unauthorized tool deployment. Yes, we’re creating AI cops. But don’t think they get to run around pointing fingers at people, and don’t let your cloud providers point fingers at you. This is one of those problems that can only be solved with a proactive education program aimed at making employees more productive, not afraid of getting fired.
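
To make the monitoring piece concrete, here is a minimal sketch of shadow-AI detection from egress logs. It assumes a plain-text log with one requested hostname per line; the endpoint list, log path, and function name are assumptions for illustration, and a real deployment would work from your proxy or DNS resolver’s actual log format.

    from collections import Counter

    # Known generative-AI endpoints to watch for; extend as new tools appear.
    AI_ENDPOINTS = {
        "api.openai.com",
        "chat.openai.com",
        "generativelanguage.googleapis.com",  # Gemini API
        "api.anthropic.com",
    }

    def scan_log(path: str) -> Counter:
        """Count requests to known AI endpoints in a hostname-per-line log."""
        hits = Counter()
        with open(path) as log:
            for line in log:
                host = line.strip().lower()
                if any(host == d or host.endswith("." + d) for d in AI_ENDPOINTS):
                    hits[host] += 1
        return hits

    for host, count in scan_log("proxy_hostnames.log").most_common():
        print(f"{host}: {count} requests")

A report like this tells you which teams are already using which tools, which is the starting point for the audit and the education program, rather than for finger-pointing.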

Shadow AI is yet another buzzword to track, but it’s also undeniably a growing problem for cloud computing security administrators. The lack of adequate defenses against these unauthorized applications is a pressing concern. However, organizations can navigate this new landscape with centralized governance, education, and proactive monitoring while still reaping the benefits of AI technologies. We need to be smart with this one.
https://www.infoworld.com/article/3835067/the-rising-threat-of-shadow-ai.html

