What you need to know about AI governance

Monday, September 23, 2024, 11:00, by InfoWorld
“You can use whatever AI tool you want to experiment in any way you want, using whatever data you want,” said no company executive ever. Executive leaders are focused on ensuring AI efforts target business opportunities while avoiding risks. Even the most cautious leaders are reluctant to ban AI outright, however, because failing to deliver AI-driven transformational outcomes risks business disruption.

In between these two hypothetical scenarios are the principles, practices, regulations, tools, and responsibilities defining how organizations seek to leverage AI capabilities, stay compliant, and avoid costly risks. AI governance refers to how organizations develop and document their AI operating model so that employees have clear guidelines.

AI governance can be a simple document outlining policies or a more encompassing operating model outlining guardrails, compliance, and procedures. How organizations define AI governance depends on their risk tolerance, industry regulations, and culture around innovation.

To better understand AI governance, consider how the AI Governance Alliance, an initiative by the World Economic Forum’s Centre for the Fourth Industrial Revolution, defines its mission: “Our work goes beyond governance, driving innovation and practical impacts across industries. We ensure AI enhances human capabilities, fosters inclusive growth, and promotes global prosperity.”

Stating a mission and objectives is an important first step in defining AI governance. The next step should be to answer key questions about where, how, and why employees should use AI capabilities. Below are seven questions to consider when developing your organization’s AI governance policies and procedures.

7 questions that define AI governance

What are the desired business objectives and outcomes from using AI?
What regulations and compliance requirements must employees follow when using AI?
How will employees use data sets in AI tools and models?
How must the organization’s data governance change to support genAI?
What AI copilots and tools can employees use?
How will employees validate AI results?
Where should employees go to learn more about AI?

What are the desired business objectives and outcomes from using AI?

How does the business want to steer its AI endeavors? Defining the North Star of promising objectives and ideal outcomes from AI usage is an important first step in AI governance.

“Many companies skip the most important step when it comes to AI experimentation: identifying specific use cases for this tech,” says Steve Smith, global director of strategic projects at Esker. “Instead of experimenting and deploying AI aimlessly, a common tactic resulting from the AI hype, stakeholders need to come up with a clear, actionable strategy.”

There’s an ongoing debate about whether AI governance should be intertwined with AI strategy or defined separately. Another approach is to update digital transformation and business strategies with AI focus areas rather than stating an independent AI strategy. Regardless of approach, stating the objectives for AI use in your organization should be one of your first guardrails.

“There are countless use cases for AI experimentation, but in order to maximize time and energy, businesses should align AI initiatives with strategic goals, focusing on evaluating data quality and reducing glaring operational pain points,” says Shibu Nambiar, global business leader, high technology, at Genpact. “Beyond tackling the obvious challenges, strategic AI implementation can enhance customer experience, predict consumer behavior, and manage supply chains.”

What regulations and compliance requirements must employees follow when using AI?

Once you’ve defined your organization’s goals and objectives for AI use, a common next step is to describe how AI may be used.

“Governance structures help ensure AI systems comply with regulations, avoiding legal and financial repercussions,” says Rahul Pradhan, VP of product and strategy at Couchbase. “Enterprises must also adhere to industry regulations and standards, including data protection laws such as GDPR and HIPAA. Proper governance ensures data privacy is maintained and comprehensive security measures are in place to protect against data breaches and cyber threats.”

Organizational leaders must take steps to update employees on changing regulations and how they impact AI innovation programs. Leaders should also pay attention to public information about how other companies are implementing AI, including mistakes that lead to bad press. These stories are an opportunity to discuss the risks of AI and how to avoid them.

“Regulatory bodies are still in an experimentation phase with their AI governance strategies, but ultimately, AI governance will comprise three pillars: ethics, privacy, and safety, and all three are extremely complicated,” says Bharat Guruprakash, chief product officer at Algolia. “Identifying the data sources and their use will be key to understanding AI threats and putting adequate guardrails in place.”

Since regulations and risks are evolving, organizations should plan to update their AI governance policies and communicate the changes to employees.

How will employees use data sets in AI tools and models?

One of the big questions for AI governance to address is how employees may and may not use company data in prompts when working with generative AI tools. Data scientists also need to know what data they can use to train AI models or feed into retrieval-augmented generation (RAG) integrations with large language models (LLMs).

“Companies adopting generative AI models need to think about what kind of data went into training the model, what’s the quality, what’s the license, and what are the potential biases,” says Dror Weiss, founder and CEO of Tabnine. “Another important aspect to consider is data security and privacy. What kind of business data will we send to query the AI models? Can we share this data with an external party, and if so, what does the model provider do with our data?”

Many employees have little understanding of the risks of AI misuse and data compliance, so it’s critical to explain the “why” behind data compliance policies.
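
One way to make such a policy concrete is to screen prompts for obviously sensitive patterns before they leave the organization. The following is a minimal sketch in Python; the patterns are illustrative placeholders, and a real deployment would rely on a vetted data loss prevention (DLP) tool rather than ad hoc regexes.

```python
import re

# Hypothetical patterns for data that must never leave the organization.
# A real deployment would use a vetted DLP tool, not ad hoc regexes.
BLOCKED_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace blocked patterns with placeholders and report what was found."""
    findings = []
    for name, pattern in BLOCKED_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(name)
            prompt = pattern.sub(f"[REDACTED-{name.upper()}]", prompt)
    return prompt, findings

sanitized, findings = redact_prompt("Summarize the complaint from jane@example.com")
print(findings)   # ['email']
print(sanitized)  # Summarize the complaint from [REDACTED-EMAIL]
```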

“Getting it wrong comes with significant ethical implications related to data misuse and global privacy regulations,” says Brian Stafford, president and CEO of Diligent. “Remaining informed and compliant requires a well-built and maintained AI governance framework that includes investing in both the proper software and education for the organization’s leadership.”

Many organizations define AI governance as an extension of data governance, which outlines policies and procedures for working with the organization’s intellectual property, confidential data, sensitive data, and other restricted data sets.

“While many data governance strategies can be tailored to fit AI governance needs, one that works well without tailoring considers the four ‘whats’ and ‘hows’ of data governance,” says Joe Regensburger, VP of research at Immuta.

Regensburger recommends answering the following questions to ensure the health and quality of data for genAI models (a minimal documentation sketch follows the list):

What data is used to train the genAI models, including the RAG sources used with LLMs?

How is the model being trained and tested?

What controls exist on deployed AI models?

How can we assess the veracity of AI model outputs?
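
Teams that want to operationalize these four questions can record the answers as structured documentation that travels with each model. Below is a minimal sketch in Python; the record type and its field names are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelGovernanceRecord:
    """Answers to the four data governance questions, kept with the model."""
    model_name: str
    # What data is used to train or ground the model (including RAG sources)?
    training_sources: list[str]
    rag_sources: list[str] = field(default_factory=list)
    # How is the model being trained and tested?
    evaluation_procedure: str = ""
    # What controls exist on the deployed model?
    deployment_controls: list[str] = field(default_factory=list)
    # How is the veracity of model outputs assessed?
    output_checks: list[str] = field(default_factory=list)

record = ModelGovernanceRecord(
    model_name="support-assistant-v2",  # hypothetical model
    training_sources=["public base model", "approved internal support tickets"],
    rag_sources=["product-docs index"],
    evaluation_procedure="golden-set Q&A review before each release",
    deployment_controls=["role-based access", "prompt and response logging"],
    output_checks=["human review of low-confidence answers"],
)
print(record.model_name, record.rag_sources)
```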

Data scientists also need a set of guidelines around the data samples used to train models. AI governance should outline requirements for avoiding biased training data and providing transparency to internal stakeholders on model development practices.

“It’s critical to select data that accurately reflects an entire population to ensure quality outputs are generated,” says Sanjeev Sawai, CTO of mPulse. “Working with diverse data sets and establishing best practices that constantly monitor for inaccuracies can create more equitable and transparent AI models that offer better experiences to their users. Responsibility for appropriate AI usage policy and training ultimately falls on organizations to prevent biased results or misuse.”
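
A simple first step in that direction is auditing how well each group is represented in a training sample. The sketch below assumes a hypothetical list of labeled records and an illustrative “region” attribute; genuine bias testing goes much further, but even a raw count can flag glaring gaps.

```python
from collections import Counter

def representation_report(records: list[dict], group_key: str,
                          threshold: float = 0.05) -> None:
    """Print each group's share of the sample and flag under-represented ones."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    for group, n in counts.most_common():
        share = n / total
        flag = "  <-- under-represented" if share < threshold else ""
        print(f"{group}: {n} ({share:.1%}){flag}")

# Hypothetical training sample; 'region' stands in for any monitored attribute.
sample = (
    [{"region": "north"}] * 900
    + [{"region": "south"}] * 80
    + [{"region": "west"}] * 20
)
representation_report(sample, "region")
```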

How must the organization’s data governance change to support genAI?

AI governance also necessitates reviewing and extending data governance. One focus area is how data governance tools and disciplines, including data catalogs, data quality, data pipelines, master data management, and customer data profiles, must now extend to support the unstructured data sources genAI requires.

“Before any meaningful AI experimentation can occur, focus on establishing strong governance policies to ensure the connection and integrity of the data flowing through the company,” says Dalan Winbush of Quickbase.

Extensions are likely needed in several areas of data governance. One example is specifying how unstructured data sources, including documents and content stored in SaaS applications, are governed. “Unfortunately, legacy data governance tools were designed in a structured data era, where governance was defensive and passive,” says Ramon Chen, CPO of Acceldata. “Increasingly, companies are turning to next-gen data observability platforms to handle data of all types and modalities, delivering real-time alerts and remediation by enforcing data quality and business policies to guarantee reliable data pipelines.”
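
In practice, extending governance to unstructured sources often means putting a quality gate in front of the pipeline that feeds a genAI index. Here is a minimal sketch, assuming a hypothetical allow-list of approved sources and a few basic checks; observability platforms do this continuously and at far greater depth.

```python
# Hypothetical allow-list of sources approved for the genAI document index.
APPROVED_SOURCES = {"sharepoint", "confluence", "product-docs"}

def admit_document(doc: dict) -> tuple[bool, str]:
    """Admit a document into the index only if it passes basic governance checks."""
    if doc.get("source") not in APPROVED_SOURCES:
        return False, "source not on the approved list"
    if not doc.get("text", "").strip():
        return False, "empty or missing content"
    if doc.get("classification") == "restricted":
        return False, "restricted data may not be indexed"
    return True, "ok"

ok, reason = admit_document(
    {"source": "confluence", "text": "Install guide ...", "classification": "internal"}
)
print(ok, reason)  # True ok
```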

AI training set data protection, AI prompt monitoring to ensure employees don’t divulge sensitive data, and output monitoring to protect secrets and intellectual property are all areas where data security and governance can assist AI governance, says Ron Reiter, CTO and co-founder of Sentra. “By enabling organizations to empower diverse teams, including privacy, governance, and compliance roles, professionals can actively bolster data security posture without requiring specialized cybersecurity training.”
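
Output monitoring can start equally small: scanning model responses for credential-shaped strings before they reach users or logs. The patterns below are illustrative assumptions; a production system would use a dedicated secret scanner.

```python
import re

# Illustrative credential patterns; a real scanner covers many more shapes.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key ID shape
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),        # generic api_key=... pairs
]

def looks_like_leak(text: str) -> bool:
    """Return True if a model response appears to contain a credential."""
    return any(p.search(text) for p in SECRET_PATTERNS)

response = "Use api_key=sk-test-123 to call the service."
if looks_like_leak(response):
    response = "[response withheld: possible credential detected]"
print(response)
```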

For companies in healthcare and other regulated industries, improving data quality and defining access rights are critical steps to meeting performance, privacy, and safety standards, says Sunil Venkataram, head of product at Wellframe, a HealthEdge company. “Organizations should leverage leading practices, tools, and trusted partners for data validation, auditing, monitoring, and reporting, as well as for detecting and mitigating potential biases, errors, or misuse of AI-generated data.”

What AI copilots and tools can employees use?

One of the key areas to address with employees is specifying what AI tools they can use in workflows, research, and experimentation. In addition to identifying the tools, a strong AI governance policy should specify what capabilities to use, what functions to avoid, who in the organization can use them, and what business processes should avoid using AI.

CIOs aim to standardize AI tools and avoid shadow IT, costly tool proliferation, and additional risk when employees share company data in tools without contracts defining the required data security. Policies should also define a process to enable employees to recommend new tools and capabilities for evaluation.
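
One lightweight way to make such a policy enforceable and auditable is to express it as data rather than prose. The sketch below is a hypothetical policy table with a lookup function; the tool names, roles, and banned uses are placeholders.

```python
# Hypothetical AI tool policy: which roles may use which tools, and for what.
AI_TOOL_POLICY = {
    "code-copilot": {
        "roles": {"engineering"},
        "banned_uses": {"customer data"},
    },
    "chat-assistant": {
        "roles": {"engineering", "marketing", "support"},
        "banned_uses": {"customer data", "legal advice"},
    },
}

def is_use_allowed(tool: str, role: str, use: str) -> bool:
    """Check a proposed use against the policy table; deny unlisted tools."""
    policy = AI_TOOL_POLICY.get(tool)
    if policy is None:
        return False
    return role in policy["roles"] and use not in policy["banned_uses"]

print(is_use_allowed("chat-assistant", "support", "draft a reply"))  # True
print(is_use_allowed("chat-assistant", "support", "customer data"))  # False
print(is_use_allowed("unvetted-tool", "engineering", "anything"))    # False
```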

Leaders should also educate employees on best practices when using copilots. “Effective AI governance and having the right tools are crucial for reducing risk and maximizing benefits,” says Harry Wang, VP of strategic partnerships at Sonar. “Taking software development as an example, AI can speed up the development but also increase the risk of major disruptions from faulty code without proper checks.”

How will employees validate AI results?

Software developers already test functionality and perform code reviews, so copilot code generators add a new dimension and velocity to existing practices. However, businesspeople using genAI may not have the same disciplines for validating results, and several well-publicized AI failures have damaged company brands and finances. AI governance can provide policies and practices regarding how employees should validate the results produced by a genAI tool or LLM.

“The most important thing to get right is helping your staff develop the required skills to use AI effectively in their business context,” says Weiss of Tabnine. “It takes considerable training to both learn to prompt effectively in any specific domain and to critically view the results, as most staff are not particularly used to their computer telling them potentially incorrect things.”

One of the issues is whether the AI tools are explainable and what factors the genAI considers when answering individual questions. “While most systems are deterministic, generative AI systems are nondeterministic, meaning results may not always be consistent or predictable,” says Simon Margolis, associate CTO of AI/ML at SADA.

Hema Raghavan, head of engineering and co-founder at Kumo AI, says, “Evaluating AI results is about calculating a metric for user satisfaction around relevance, reliability, and speed. Relevance means whether a user seeking information is satisfied with the answer presented, reliability is consistent answers, and speed means giving lightning-fast results.”

An AI governance plan for validating results may include the following (a sketch of one validation procedure appears after the list):

Documentation of the internal information sources used in RAG integrations.

Testing and validation procedures before putting any code, automation, or other AI result into operational use.

A centralized prompt library so that employees can see how others use a genAI tool.
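
Because genAI output is nondeterministic, as Margolis notes above, one practical validation procedure is to sample the same prompt several times and measure agreement, alongside comparison against a reviewed “golden” answer set, echoing the relevance and reliability metrics Raghavan describes. A minimal sketch, assuming a hypothetical ask_model function that stands in for whatever tool is actually in use:

```python
from collections import Counter

def ask_model(prompt: str) -> str:
    """Stand-in for a real model call; returns a canned answer for this demo."""
    return "Paris is the capital of France."

def consistency_check(prompt: str, runs: int = 5) -> float:
    """Fraction of repeated runs that agree with the most common answer."""
    answers = [ask_model(prompt).strip().lower() for _ in range(runs)]
    _, top_count = Counter(answers).most_common(1)[0]
    return top_count / runs

def golden_set_accuracy(golden: dict[str, str]) -> float:
    """Share of reviewed questions whose expected answer appears in the output."""
    hits = sum(
        expected.lower() in ask_model(question).lower()
        for question, expected in golden.items()
    )
    return hits / len(golden)

golden = {"What is the capital of France?": "Paris"}
print(consistency_check("What is the capital of France?"))  # 1.0 with this stub
print(golden_set_accuracy(golden))                          # 1.0
# A deployment gate might require, say, consistency >= 0.8 and
# golden-set accuracy >= 0.95 before an AI-assisted workflow goes live.
```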

“With AI advancements outpacing the rulesets governing AI use, business leaders need to prioritize educating employees on the key principles of responsible AI use: explainability, accountability, risk management, oversight, ethical alignment, bias control, and transparency,” says Ed Frederici, CTO of Appfire. “Some tactics include auditing AI-driven outcomes to ensure outputs remain within expected bounds, establishing clear protocols for when AI decisions need human oversight, implementing systems where feedback can be used to improve the AI decision-making process, and ensuring AI decisions can be explained in understandable terms.”

Organizations in regulated industries or where human safety is a concern should define more specific testing requirements.

Where should employees go to learn more about AI?

AI governance should also have an eye on the future and guide employees on where to learn about AI technologies and best practices. Learning programs can help employees learn new skills and adapt to a rapidly changing world of virtual agents and machine intelligence.

“True progress combines AI’s capabilities with human insight, unleashing technology’s full potential to serve human needs and ethical standards,” says Nambiar of Genpact. “By training employees to prioritize human-centric AI, we will create a future where technology empowers people, enhances creativity, and upholds our core values.”

Top organizations will consider leadership, innovation, skills, and other training to help employees re-envision products and work for the genAI era. While productivity is an early benefit, driving digital transformation with genAI will require both blue-sky planning and reinventing business models.
https://www.infoworld.com/article/3504671/what-you-need-to-know-about-ai-governance.html
