Risks of Generative AI: 6 Risk Management Tips

Tuesday, August 15, 2023, 23:37, by eWeek
Generative AI models are powerful tools that data scientists are using to produce impressive results, from content creation to improved data analytics performance. Yet it’s becoming apparent that many users are not aware of the risks of using generative AI.
As more users incorporate generative AI into their daily workflows without doing their due diligence, the potential consequences of using these AI models are beginning to outweigh the benefits. This is particularly true for enterprise users who need transparent data and security practices to protect customer data and comply with industry-specific regulations.
In this guide, we’ll briefly discuss some of the risks of using generative AI. Also important, we’ll walk through some of the best ways your organization can manage and mitigate generative AI risk.
Table of Contents: Generative AI Risks and Risk Management

What Are the Risks Associated With Generative AI?
Generative AI Risk Management: 6 Best Practices
Bottom Line: Mitigating Generative AI Risks

What Are the Risks Associated With Generative AI?
Generative AI models are massive artificial intelligence tools that rely on large quantities of third-party data, neural network architecture, and complex algorithms to generate original content.
These qualities make generative models capable of human-like problem-solving. But much as with a human brain, not all of a generative AI model’s decisions and outputs are easy to dissect. This lack of transparency is one of many generative AI risks that users must consider in the context of their work.
Other important risks of generative AI include the following:

The possibility that an AI model has used copyrighted data or creations without a creator’s consent; even when users are unaware of this violation, if they make commercial use of unauthorized content, they could be held liable.
The possibility that sensitive consumer data submitted as input becomes part of the model’s ongoing training dataset.
A model’s training data could be biased, incomplete, or otherwise inaccurate.
Models may not have as many built-in safeguards as your organization needs to protect data and comply with regulations.
There’s always the chance an employee could unknowingly misuse a model or sensitive data, exposing intellectual property and other sensitive company information.
Generative models occasionally hallucinate, which means they confidently generate content that is inaccurate.
Models frequently store data for an extended period of time, which increases datasets’ chances of being exposed to a cyberattack.
Many models have little to no cybersecurity tooling in place natively; additionally, generative AI models can be tricked into solving security puzzles for malicious actors.
Generative models’ lack of transparency makes it difficult to quality-test and audit model data and usage.
Generative AI models are subject to few regulations, though several countries have AI legislation in the works; in general, generative AI vendors are not being held to the same consumer-protection standards as many other businesses.

More on a similar topic: Generative AI Ethics: Concerns and Solutions
Generative AI Risk Management: 6 Best Practices
1) Create and Enforce an AI Use Policy in Your Organization
Whether you’re already using generative AI across your organization or are simply considering its benefits, it’s a good idea to establish an AI acceptable use policy. This policy should explain:

Which departments and roles are allowed to use generative models as part of their work.
What parts of the workflow can be automated or supplemented with generative AI.
What internal data and apps are and are not allowed to be exposed to these models.

Establishing these policies gives your employees a better framework for what they can and cannot do with generative AI and also gives your leadership a clearer understanding of what behaviors they should be monitoring and correcting. If you’re looking for an existing policy to model your own policy after, NIST’s Artificial Intelligence Risk Management Framework is a great option.
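To make a policy like this enforceable rather than purely aspirational, some teams also encode it in a machine-readable form that internal tooling can check before a request ever reaches a model. The Python sketch below is a minimal illustration of that idea; the department names, tool names, data classes, and the is_request_allowed helper are all hypothetical examples, not drawn from NIST’s framework or any specific product.

# Minimal sketch of a machine-checkable AI acceptable use policy.
# Every name here (departments, tools, data classes) is a hypothetical example.
AI_USE_POLICY = {
    "marketing": {"tools": {"gpt-4"}, "data_classes": {"public", "internal"}},
    "engineering": {"tools": {"gpt-4", "code_assistant"}, "data_classes": {"public"}},
}

def is_request_allowed(department: str, tool: str, data_class: str) -> bool:
    """Return True only if this department may send this class of data to this tool."""
    rules = AI_USE_POLICY.get(department)
    if rules is None:
        return False  # departments without explicit rules are denied by default
    return tool in rules["tools"] and data_class in rules["data_classes"]

print(is_request_allowed("marketing", "gpt-4", "internal"))      # True
print(is_request_allowed("marketing", "gpt-4", "customer_pii"))  # False

A deny-by-default rule like the one above mirrors the policy advice itself: anything the policy does not explicitly allow stays out of bounds until leadership says otherwise.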
Also see: Top Generative AI Apps and Tools
2) Use First-Party Data and Responsibly Source Third-Party Data
When using a generative model in an enterprise setting, you should always know where your input data is coming from. Whenever possible, use first-party data that your company owns; this will help you keep track of your data’s origins and determine if that data is safe to use in a generative model. If it becomes necessary for you to use data that your company does not own, make sure you are using credible third-party sources with permission. This will increase the likelihood of you using high-quality data and steering clear of lawsuits for unauthorized data use.
Beyond the data your organization chooses to input into existing models, it’s also beneficial to research how generative AI vendors source their training data. Vendors that refuse to explain these processes in their documentation should raise alarms for your organization and be avoided.
Even if your organization unknowingly uses or benefits from data that was illegally sourced by the vendor, you may still be held responsible for any outputs that violate copyright laws or privacy rights.
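One lightweight way to act on this advice is to attach provenance metadata to every record and filter out anything whose origin or license you cannot account for before it reaches a model. The sketch below assumes hypothetical origin and license fields; adapt the vocabulary to however your organization actually tracks data lineage.

# Minimal sketch of a provenance filter for model input data.
# The "origin" and "license" metadata fields are assumed conventions.
APPROVED_ORIGINS = {"first_party"}
APPROVED_LICENSES = {"owned", "licensed_with_permission"}

def safe_for_model(record: dict) -> bool:
    """Keep only records whose origin or license we can clearly account for."""
    return (record.get("origin") in APPROVED_ORIGINS
            or record.get("license") in APPROVED_LICENSES)

records = [
    {"id": 1, "origin": "first_party", "license": "owned"},
    {"id": 2, "origin": "web_scrape", "license": "unknown"},  # filtered out
]
print([r["id"] for r in records if safe_for_model(r)])  # [1]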
3) Train Employees on Appropriate Data and Generative Model Usage
Having an AI use policy in place does not guarantee that all users will know and follow the policy’s specific rules.
That’s why it’s important to train employees with the policy as a foundation and then explain how that policy applies to specific roles, departments, and scenarios. This will help your organization to maintain optimal levels of security and compliance by avoiding individual user errors that are more difficult to detect. All employees should receive this training, but employees who work with your most sensitive data and applications should receive more detailed and frequent training.
For better AI use outcomes, training should extend beyond your policy and cover tips for how to detect AI bias, misinformation, and hallucinations. This will give your employees the confidence they need to use models well and make more calculated decisions for tasks like AI-driven data analytics.
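Hands-on training can include simple exercises like the one sketched below: a crude grounding check that flags generated sentences sharing few words with a trusted source document. The word-overlap heuristic and the 0.5 threshold are arbitrary illustrations rather than a real fact-checking method, but they give reviewers a concrete starting point for deciding which claims to verify by hand.

# Crude sketch of a hallucination spot-check for training exercises:
# flag output sentences with little word overlap against the source text.
def ungrounded_sentences(output: str, source: str, threshold: float = 0.5) -> list:
    source_words = set(source.lower().split())
    flagged = []
    for sentence in output.split("."):
        words = set(sentence.lower().split())
        if not words:
            continue
        overlap = len(words & source_words) / len(words)
        if overlap < threshold:  # few shared words: claim needs manual review
            flagged.append(sentence.strip())
    return flagged

source = "Our Q2 revenue grew 8 percent on strong cloud sales."
output = "Q2 revenue grew 8 percent. The company also acquired three startups."
print(ungrounded_sentences(output, source))  # flags the acquisition claim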
Also see: The Benefits of Generative AI
4) Invest in Cybersecurity Tools That Address AI Security Risks
Generative AI models and artificial intelligence tools in general require strong cybersecurity protections to secure all of the information they contain. Unfortunately, many generative AI models do not have much native cybersecurity infrastructure in place, or these features are so difficult to configure that most users do not set them up.
To protect input and output data against cybersecurity threats, you should treat any generative AI models you use as part of your network’s attack surface and set up your network security tools accordingly. If you’re not already using cybersecurity tools to protect your enterprise network, we recommend investing in tools like the following, which are designed with AI and other modern attack surfaces in mind (a small data loss prevention sketch follows the list):

Identity and access management.
Data encryption and data security tools.
Cloud security posture management (CSPM).
Penetration testing.
Extended detection and response (XDR).
Threat intelligence.
Data loss prevention (DLP).

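As a concrete example of the data loss prevention idea, the sketch below redacts likely PII from a prompt before it leaves your network for an external generative AI API. Real DLP products use far more robust detection; the two regular expressions here are deliberately simple illustrations.

import re

# Minimal sketch of a DLP-style prompt filter. These patterns are
# illustrative examples; production DLP tooling is much more thorough.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely PII with labeled placeholders before the API call."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
# Contact [REDACTED EMAIL], SSN [REDACTED SSN].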
5) Build a Team of AI Quality Assurance Analysts
If you’re planning to use AI models on a larger scale, it’s worth hiring or training existing employees to work as AI quality assurance analysts or engineers. These individuals are responsible for maintaining and monitoring both the AI models your organization uses and the data that goes into these models.
Some of the most important responsibilities of an AI quality assurance analyst include the following (a small monitoring sketch follows the list):

Detecting issues in both training data and input data.
QA testing outputs to ensure generated content is accurate, original, and appropriately attributed.
Monitoring generative models, APIs, and other relevant infrastructure for unusual behaviors.
Maintaining regular documentation of model behavioral anomalies and data inaccuracies to support future audits.
Working closely with cybersecurity and data science teams to use generative AI tools effectively.

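To make the documentation and monitoring duties concrete, the sketch below shows one way a QA team might log every model call with enough context to investigate anomalies later. The anomaly heuristics, field names, and log path are hypothetical examples rather than any standard.

import json
import time

# Minimal sketch of audit logging for generative model calls.
# The anomaly checks and field names are illustrative assumptions.
def log_model_call(model: str, prompt: str, output: str, path: str = "ai_audit.jsonl"):
    anomalies = []
    if not output.strip():
        anomalies.append("empty_output")
    if len(output) > 10 * max(len(prompt), 1):
        anomalies.append("unusually_long_output")
    entry = {
        "ts": time.time(),
        "model": model,
        "prompt_chars": len(prompt),  # log sizes, not raw text, to limit exposure
        "output_chars": len(output),
        "anomalies": anomalies,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_model_call("example-model", "Summarize our refund policy.", "Refunds are honored within 30 days.")

Append-only records like these support the audit trail described above without storing sensitive prompt text itself.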
6) Research Generative AI Models Before Using Them
While generative AI vendors are not currently required by law to offer transparent documentation about how their models are trained, a number of them have expanded their documentation in response to customer requests.
As an example, OpenAI provides a publicly available research index with regularly released papers on individual products and the steps the company is taking to make large language models safer and more efficient. The company also has several legal terms and policies, including usage policies and API data usage policies.
Generative AI customers should read this type of research not only to better understand the full range of capabilities these tools possess but also to determine if data has been sourced and models have been trained in ways that align with company values and industry regulations.
It’s also prudent to keep up with news and lawsuits related to AI companies, as this information could impact what products you choose to use moving forward.
Also see: Generative AI Companies: Top 12 Leaders
Bottom Line: Mitigating Generative AI Risks
Generative AI models offer a number of automations, shortcuts, and easy answers to enterprise users, making them great tools for growth and innovation. In fact, these models are so advanced and scalable that they’re being used for complex tasks ranging from drug discovery to smart manufacturing.
But using generative AI models comes with serious risks if you’re unwilling to research these solutions and bolster them with the right cybersecurity tools, policies, and best practices. These models are changing and expanding their capabilities on an almost daily basis, so it’s important to stay up to date on their potential use cases, how they are being regulated, and how to ensure internal use aligns with both external regulations and your organization’s own policies and best practices.
Also see: 100+ Top AI Companies 2023
https://www.eweek.com/artificial-intelligence/generative-ai-risks/