
A GRC framework for securing generative AI

Tuesday, November 19, 2024, 10:00, by InfoWorld
From automating workflows to unlocking new insights, generative AI models like OpenAI’s GPT-4 are already delivering value in enterprises across every industry. But with this power comes a critical challenge for organizations: How do they secure and manage the expanding ecosystem of AI applications that touch sensitive business data? Generative AI solutions are popping up everywhere—embedded in platforms, integrated into products, and accessible via public tools.

In this article, we introduce a practical framework for categorizing and securing generative AI applications, giving businesses the clarity they need to govern AI interactions, mitigate risk, and stay compliant in today’s rapidly evolving technology landscape.

Types of AI applications and their impact on enterprise security

AI applications differ significantly in how they interact with data and integrate into enterprise environments, making categorization essential for organizations aiming to evaluate risk and enforce governance controls. Broadly, there are three main types of generative AI applications that enterprises need to focus on, each presenting unique challenges and considerations.

Web-based AI tools – Web-based AI products, such as OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude, are widely accessible via the web and are often used by employees for tasks ranging from content generation to research and summarization. The open and public nature of these tools presents a significant risk: Data shared with them is processed outside the organization’s control, which can lead to the exposure of proprietary or sensitive information. A key question for enterprises is how to monitor and restrict access to these tools, and whether data being shared is adequately controlled. OpenAI’s enterprise features, for instance, provide some security measures for users, but these may not fully mitigate the risks associated with public models.

AI embedded in operating systems – Embedded AI products, such as Microsoft Copilot and the AI features within Google Workspace or Office 365, are tightly integrated into the systems employees already use daily. These embedded tools offer seamless access to AI-powered functionality without needing to switch platforms. However, deep integration poses a challenge for security, as it becomes difficult to delineate safe interactions from interactions that may expose sensitive data. The crucial consideration here is whether data processed by these AI tools adheres to data privacy laws, and what controls are in place to limit access to sensitive information. Microsoft’s Copilot security protocols offer some reassurance but require careful scrutiny in the context of enterprise use.

AI integrated into enterprise products – Integrated AI products, like Salesforce Einstein, Oracle AI, and IBM Watson, tend to be embedded within specialized software tailored for specific business functions, such as customer relationship management or supply chain management. While these proprietary AI models may reduce exposure compared to public tools, organizations still need to understand the data flows within these systems and the security measures in place. The focus here should be on whether the AI model is trained on generalized data or tailored specifically for the organization’s industry, and what guarantees are provided around data security. IBM Watson, for instance, outlines specific measures for securing AI-integrated enterprise products, but enterprises must remain vigilant in evaluating these claims.
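As a minimal illustration, these three broad categories could be captured as a simple taxonomy in code. The enum below is an assumed convention, not part of any standard; the example products in the comments simply mirror those mentioned above.

```python
from enum import Enum


class AICategory(Enum):
    """The three broad categories of generative AI applications discussed above."""
    WEB_TOOL = "web-based AI tool"                   # e.g., ChatGPT, Gemini, Claude
    OS_EMBEDDED = "AI embedded in OS or suite"       # e.g., Microsoft Copilot, Google Workspace AI
    PRODUCT_INTEGRATED = "AI integrated into enterprise product"  # e.g., Salesforce Einstein, IBM Watson
```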

Classifying AI applications for risk management

Understanding the three broad categories of AI applications is just the beginning. To effectively manage risk and governance, further classification is essential. By evaluating key characteristics such as the provider, hosting location, data flow, model type, and specificity, enterprises can build a more nuanced approach to securing AI interactions.

A crucial factor in this deeper classification is the provider of the AI model. Public AI models, like OpenAI’s GPT and Google’s Gemini, are accessible to everyone, but with this accessibility comes less control over data security and greater uncertainty around how sensitive information is handled. In contrast, private AI models, often integrated into enterprise solutions, offer more control and customization. However, these private models aren’t without risk. They must still be scrutinized for potential third-party vulnerabilities, as highlighted by PwC in their analysis of AI adoption across industries.

Another key aspect is the hosting location of the AI models—whether they are hosted on premises or in the cloud. Cloud-hosted models, while offering scalability and ease of access, introduce additional challenges around data residency, sovereignty, and compliance. Particularly when these models are hosted in jurisdictions with differing regulatory environments, enterprises need to ensure that their data governance strategies account for these variations. NIST’s AI Risk Management Framework provides valuable guidance on managing these hosting-related risks.

The data storage and flow of an AI application are equally critical considerations. Where the data is stored—whether in a general-purpose cloud or on a secure internal server—can significantly impact an organization’s ability to comply with regulations such as GDPR, CCPA, or industry-specific laws like HIPAA. Understanding the path that data takes from input to processing to storage is key to maintaining compliance and ensuring that sensitive information remains secure. The OECD AI Principles offer useful guidelines for maintaining strong data governance in the context of AI usage.

The model type also must be considered when assessing risk. Public models, such as GPT-4, are powerful but introduce a degree of uncertainty due to their general-purpose design and the broad, publicly sourced data they are trained on. Private models, tailored specifically for enterprise use, may offer a higher level of control but still require robust monitoring to ensure security. OpenAI’s research on GPT-4, for instance, illustrates both the advancements and potential security challenges associated with public AI models.

Finally, model training has important risk implications. Distinguishing between generalized AI and industry-specific AI can help in assessing the level of inherent risk and regulatory compliance. Generalized AI models, like OpenAI’s GPT, are designed to handle a broad array of tasks, which can make it harder to predict how they will interact with specific types of sensitive data. On the other hand, industry-specific AI models, such as IBM Watson Health, are tailored to meet the particular needs and regulatory requirements of sectors like healthcare or financial services. While these specialized models may come with built-in compliance features, enterprises must still evaluate their suitability for all potential use cases and ensure that protections are comprehensive across the board.
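One way to operationalize this classification is to record each application as a structured profile and derive a coarse risk tier from it. The sketch below is a minimal illustration in Python; the field names, enum values, and risk weights are assumptions for the sake of the example, not part of any standard framework.

```python
from dataclasses import dataclass
from enum import Enum


class Hosting(Enum):
    CLOUD = "cloud"
    ON_PREMISES = "on_premises"


class DataFlow(Enum):
    EXTERNAL = "external"    # prompts and data are processed outside the organization
    INTERNAL = "internal"    # data stays within governed infrastructure


class ModelType(Enum):
    GENERAL_PUBLIC = "general_public"
    INDUSTRY_SPECIFIC = "industry_specific"


class Training(Enum):
    GENERALIZED = "generalized"              # broad, publicly sourced training data
    INDUSTRY_SPECIFIC = "industry_specific"  # trained for one sector's needs


@dataclass
class AIAppProfile:
    """Classification record for a single generative AI application."""
    name: str
    provider: str        # vendor, e.g. "OpenAI" or "IBM"
    hosting: Hosting
    data_flow: DataFlow
    model_type: ModelType
    training: Training

    def risk_tier(self) -> str:
        """Derive a coarse risk tier from the profile (illustrative weights)."""
        score = 0
        if self.data_flow is DataFlow.EXTERNAL:
            score += 2
        if self.hosting is Hosting.CLOUD:
            score += 1
        if self.model_type is ModelType.GENERAL_PUBLIC:
            score += 1
        if self.training is Training.GENERALIZED:
            score += 1
        if score >= 4:
            return "high"
        if score >= 2:
            return "medium"
        return "low"
```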

Establishing a governance framework for AI interactions

Classifying AI applications is the foundation for creating a governance structure that ensures AI tools are used safely within an enterprise. Here are five key components to build into this governance framework (a minimal policy sketch in code follows the list):

Access control: Who in the organization can access different types of AI tools? This includes setting role-based access policies that limit the use of AI applications to authorized personnel. Reference: Microsoft Security Best Practices outline strategies for access control in AI environments.

Data sensitivity mapping: Align AI applications with data classification frameworks to ensure that sensitive data isn’t being fed into public AI models without the appropriate controls in place. Reference: GDPR Compliance Guidelines provide frameworks for data sensitivity mapping.

Regulatory compliance: Make sure the organization’s use of AI tools complies with industry-specific regulations (e.g., GDPR, HIPAA) as well as corporate data governance policies. Reference: OECD AI Principles offer guidelines for ensuring regulatory compliance in AI deployments.

Auditing and monitoring: Continual auditing of AI tool usage is essential for spotting unauthorized access or inappropriate data usage. Monitoring can help identify violations in real time and allow for corrective action. Reference: NIST AI Risk Management Framework emphasizes the importance of auditing and monitoring in AI systems.

Incident response planning: Create incident response protocols specifically for AI-related data leaks or security incidents, ensuring rapid containment and investigation when issues arise. Reference: AI Incident Database provides examples and guidelines for responding to AI-related security incidents.
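As a minimal sketch of how these five components might be expressed in practice, the following maps the risk tiers from the earlier classification sketch to concrete controls. The role names, data classes, playbook names, and review intervals are illustrative assumptions, not a prescribed standard.

```python
# GOVERNANCE_POLICY keys are the risk tiers returned by AIAppProfile.risk_tier().
GOVERNANCE_POLICY = {
    "high": {
        "access_control": {"allowed_roles": ["marketing", "research"]},   # role-based access
        "data_sensitivity": {"max_data_class": "public"},                 # no PII or confidential data
        "compliance": {"frameworks": []},                                 # not approved for regulated data
        "auditing": {"log_prompts": True, "review_interval_days": 7},
        "incident_response": {"playbook": "external-ai-data-leak"},
    },
    "low": {
        "access_control": {"allowed_roles": ["clinical_staff"]},
        "data_sensitivity": {"max_data_class": "highly_sensitive"},       # e.g., PHI under HIPAA
        "compliance": {"frameworks": ["HIPAA", "HITRUST"]},
        "auditing": {"log_prompts": True, "review_interval_days": 1},
        "incident_response": {"playbook": "phi-breach-notification"},
    },
}


def controls_for(profile: "AIAppProfile") -> dict:
    """Return the controls for an application's tier; default to the most restrictive."""
    return GOVERNANCE_POLICY.get(profile.risk_tier(), GOVERNANCE_POLICY["high"])
```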

Example: Classifying OpenAI ChatGPT and IBM Watson Health

Let’s classify OpenAI ChatGPT and IBM Watson Health for risk management according to the characteristics we outlined above.

Model                  | OpenAI ChatGPT                 | IBM Watson Health
Provider               | OpenAI                         | IBM
Hosting location       | Cloud-hosted AI model (Azure)  | Cloud-hosted AI model (IBM Cloud)
Data storage and flow  | External data processing       | Internal data processing
Model type             | General public model           | Industry-specific public model (healthcare)
Model training         | Public knowledge, generalized  | Industry-specific model training
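To make the table concrete, the two classifications could be encoded with the illustrative schema sketched earlier. The values mirror the table above; the resulting risk tiers follow from the assumed weights, not from any official rating.

```python
chatgpt = AIAppProfile(
    name="OpenAI ChatGPT",
    provider="OpenAI",
    hosting=Hosting.CLOUD,                    # hosted on Azure
    data_flow=DataFlow.EXTERNAL,
    model_type=ModelType.GENERAL_PUBLIC,
    training=Training.GENERALIZED,
)

watson_health = AIAppProfile(
    name="IBM Watson Health",
    provider="IBM",
    hosting=Hosting.CLOUD,                    # hosted on IBM Cloud
    data_flow=DataFlow.INTERNAL,
    model_type=ModelType.INDUSTRY_SPECIFIC,   # healthcare
    training=Training.INDUSTRY_SPECIFIC,
)

print(chatgpt.risk_tier())        # "high" under the illustrative weights
print(watson_health.risk_tier())  # "low" under the illustrative weights
```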

Now that we have the classifications, let’s overlay our governance framework.

Access control
OpenAI ChatGPT: ChatGPT, being a general-purpose, cloud-hosted AI, must have strict access controls. Role-based access should restrict its use to employees working in non-sensitive areas (e.g., content creation, research). Employees handling sensitive or proprietary information should have limited access to prevent accidental data exposure.
IBM Watson Health: IBM Watson Health is a highly specialized AI model tailored for healthcare, so access must be limited to healthcare professionals or staff authorized to handle sensitive medical data (PHI). Fine-grained role-based access control should ensure only those with explicit needs can use Watson Health.

Data sensitivity mapping
OpenAI ChatGPT: ChatGPT should be classified under “high-risk” for sensitive data processing due to its public, external data handling nature. Enterprises should map its use to less sensitive data (e.g., marketing or general information) and prevent interaction with customer PII or confidential business data.
IBM Watson Health: Because Watson Health is designed to handle sensitive data (e.g., patient records, PHI), it must align with healthcare-specific data classification systems. All data processed should be marked as “highly sensitive” under classification frameworks like HIPAA, and stringent safeguards must be in place.

Regulatory compliance
OpenAI ChatGPT: ChatGPT may struggle to meet strict regulatory standards like GDPR or HIPAA, as it’s not inherently compliant for handling highly sensitive or regulated data. Organizations must ensure that employees do not feed it information governed by strict data privacy laws.
IBM Watson Health: Watson Health is designed to comply with industry regulations like HIPAA for healthcare and HITRUST for data security. However, enterprises still need to ensure that their specific deployment configurations are aligned with these standards, particularly regarding how data is stored and processed.

Auditing and monitoring
OpenAI ChatGPT: Continuous monitoring of interactions with ChatGPT is crucial, especially to track the data that employees share with the model. Logging all interactions can help identify policy violations or risky data-sharing practices.
IBM Watson Health: Given its role in handling sensitive healthcare data, Watson Health requires continuous, real-time auditing and monitoring to detect potential unauthorized access or data breaches. Logs must be securely stored and routinely reviewed for compliance violations.

Incident response planning
OpenAI ChatGPT: Given ChatGPT’s general-purpose nature and external hosting, a specific incident response plan should be developed to address potential data leaks or unauthorized use of the model. If sensitive information is mistakenly shared, the incident must be investigated swiftly.
IBM Watson Health: In case of a data breach or PHI exposure, Watson Health must have a healthcare-specific incident response plan. Rapid containment, remediation, and reporting (including notifications under HIPAA’s breach notification rule) are critical.
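The auditing and monitoring entries above both call for logging every interaction. Building on the earlier sketches (AIAppProfile, controls_for, and the chatgpt profile), the following shows one way such an audit wrapper could look; the logger setup, data-class ranking, and the send callable are placeholders standing in for an organization’s real tooling, not an actual client API.

```python
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)

# Assumed ordering of data classifications, least to most sensitive.
DATA_CLASS_RANK = {"public": 0, "internal": 1, "confidential": 2, "highly_sensitive": 3}


def audited_prompt(user_id: str, profile, prompt: str, data_class: str, send):
    """Log the interaction, enforce the data-sensitivity ceiling, then forward it."""
    allowed = controls_for(profile)["data_sensitivity"]["max_data_class"]
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "model": profile.name,
        "data_class": data_class,
        "allowed_class": allowed,
    }))
    if DATA_CLASS_RANK[data_class] > DATA_CLASS_RANK[allowed]:
        raise PermissionError(f"{data_class} data may not be sent to {profile.name}")
    return send(prompt)  # placeholder for the real model client call


# Example: a public-data prompt to ChatGPT passes; sending PHI would raise PermissionError.
audited_prompt("u123", chatgpt, "Draft a product announcement", "public",
               send=lambda p: "stubbed model response")
```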

Reducing AI risks through AI governance

As AI technology advances, it brings both transformative opportunities and unprecedented risks. For enterprises, the challenge is no longer whether to adopt AI, but how to govern AI responsibly, balancing innovation against security, privacy, and regulatory compliance.

By systematically categorizing generative AI applications—evaluating the provider, hosting environment, data flow, and industry specificity—organizations can build a tailored governance framework that strengthens their defenses against AI-related vulnerabilities. This structured approach enables enterprises to anticipate risks, enforce robust access controls, protect sensitive data, and maintain regulatory compliance across global jurisdictions.

The future of enterprise AI is about more than just deploying the latest models; it’s about embedding AI governance deeply into the fabric of the organization. Enterprises that take a proactive, comprehensive approach will not only safeguard their business against potential threats but also unlock AI’s full potential to drive innovation, efficiency, and competitive advantage in a secure and compliant manner.

Trevor Welsh is VP of products at WitnessAI.



Generative AI Insights provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld’s technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact doug_dineley@foundryco.com.
https://www.infoworld.com/article/3604732/a-grc-framework-for-securing-generative-ai.html

