MacMusic  |  PcMusic  |  440 Software  |  440 Forums  |  440TV  |  Zicos
generative
Recherche

Generative AI Ethics: 10 Ethical Challenges (With Best Practices)

vendredi 9 août 2024, 23:00 , par eWeek
Generative AI ethics is an increasingly urgent issue for users, businesses, and regulators as the technology becomes both more mainstream and more powerful. Although still a nascent form of artificial intelligence, generative AI (GenAI) has already attracted enormous investment due to its remarkable ability to generate original, human-like content based on massive datasets and neural network technology. This ability raises challenging questions about how developers and users of generative AI can remain compliant with privacy, security, and intellectual property regulations, making the need to establish clear guardrails and guiding ethical principles paramount.

Businesses need a clear understanding of how to use generative AI responsibly and how to align their goals for the technology with their company values to protect customers, data, and their business operations, and vendors need legal and ethical frameworks for developing and training GenAI tools to ensure they’re contributing to the appropriate use of the technology moving forward. Here’s what you need to know.

KEY TAKEAWAYS

•Ethics in generative AI are important because this type of AI has enormous potential, but also enormous potential to be disruptive. (Jump to Section)
•As the EU AI Act and various other international, national, and industry regulations are imposed to manage generative AI ethical issues, businesses will need to keep pace to ensure compliance. (Jump to Section)

In addition to following conventional best practices when using enterprise technology, business leaders can benefit from a growing portfolio of online generative AI ethics courses and certification programs. (Jump to Section)

TABLE OF CONTENTS
ToggleGenerative AI Ethics: 10 Key Ethical ChallengesWhy Are Generative AI Ethics Important?Best Practices for Using Generative AI EthicallyGenerative AI Ethics Laws and Frameworks3 Top Generative AI Ethics Courses to ConsiderBottom Line: Generative AI Ethics Remains Challenging

Generative AI Ethics: 10 Key Ethical Challenges

Ethical challenges abound with generative AI, given the relative newness and remarkable potential of this form of artificial intelligence. These ethical issues include concerns around privacy, security, accountability, environmental impact, and more. The degree of difficulty each of these challenges offers varies considerably, but taken in their totality, they require considerable resources to handle.

Preventing Bias in Datasets

Like other types of artificial intelligence, a generative AI model is only as good as its training data is diverse and unbiased. Biased training data can teach AI models to treat certain groups of people disrespectfully, spread propaganda or fake news, and create offensive images or content that targets marginalized groups. Less directly harmful, but still problematic, content inaccuracies can perpetuate outdated cultural tropes as facts.

Protecting the Privacy of Users

Whether you elect to use generative AI technology or not, there’s a chance your personal data could be used without your knowledge as part of the model’s training dataset. For example, some models collect training data from unauthorized or unverified corners of the internet, where your information may live without your consent. More commonly, users of chatbot tools like ChatGPT will create a free account and use the free version of the tool without fully understanding when and how their data is collected and used. Depending on the type of data you choose to submit in these tools, it could lead to identity theft, credit card theft, and other personal violations as a result of your personal data being exposed.

OpenAI recently added a new ChatGPT feature that lets free plan users turn on a temporary chat that won’t save their data for training purposes. While this is a great step toward privacy, many users still aren’t aware of this feature and don’t realize that ChatGPT can and often will save your inputs as training data if you don’t turn this feature on.

OpenAI recently added a new ChatGPT feature for free plan users.

Increasing the Transparency of Training Processes

Companies like OpenAI are working hard to make their training processes more transparent, but for the most part, it isn’t clear what kinds of data are being used, where training datasets are being collected, and how they’re being used to train generative AI models. This limited transparency not only raises concerns about possible data theft or misuse but also makes it more difficult to test the quality and accuracy of a generative AI model’s outputs and the references on which they’re based.

In addition to updating their policies and sharing their development plans, some generative AI leaders are adding additional features so users can check the sources behind generated information. For example, Google Gemini lets users click an icon to reveal sources if they’re unsure about how and where information was sourced. In some cases, Gemini will highlight content in orange if it cannot definitely prove the content’s source.

Users can click the Google icon in the main text bar to force Gemini to reveal its sources.

Holding Developers Accountable for Content

Accountability is difficult to achieve with generative AI precisely because of how the technology works. Since AI runs on algorithms that allow the technology to functionally make independent decisions, AI developers and companies frequently argue that they cannot control AI hallucinations or other reckless decisions that an AI tool makes algorithmically. So far, this lack of transparency has effectively gotten AI companies off the hook for offensive content outputs, as they claim ignorance about how these models are evolving over time.

Preventing AI-Assisted Cyberthreats

Although generative AI tools can be used to support cybersecurity efforts, they can also be jailbroken and/or used in ways that put security in jeopardy. For example, ChatGPT tricked a TaskRabbit worker into solving a CAPTCHA puzzle on behalf of the tool by “pretending” to be a blind individual who needed support to receive this assistance.

The advanced training these tools receive to produce human-like content gives them the ability to convincingly manipulate humans through phishing attacks, adding a non-human and unpredictable element to an already volatile cybersecurity landscape. Many of these tools also have little to no built-in cybersecurity protections and infrastructure. As a result, unless your organization is dedicated to protecting your chosen generative AI tools as part of your greater attack surface, the data you use in these tools could more easily be breached and compromised by a bad actor.

Mitigating the Environmental Impact of AI

Generative AI models consume massive amounts of energy very quickly, both as they’re being trained and as they handle user queries. The latest generative AI tools have not had their carbon footprints studied as closely as other technologies, yet even as early as 2019, research indicated that BERT models (a type of large language model) had carbon emissions roughly equivalent to the emissions of a roundtrip flight for one person in an airplane. Keep in mind this amount is just the emissions from one model during training on a GPU. As these models continue to grow in size, use cases, and sophistication, their environmental impact will surely increase if strong regulations aren’t put in place.

Guarding Against Misuse of AI

Whether intentional or not, it’s very easy to misuse a generative AI tool in a way that compromises data security and privacy or otherwise causes harm. For example, an employee in a healthcare setting may accidentally expose key patient or payment information to a generative AI tool in a way that compromises that data and allows it to be stolen.

In other, less well-meaning cases, generative AI may be used to generate AI deepfakes, or realistic-looking videos, images, audio clips, and texts that are wrongly attributed to someone in order to make them look bad or spread misinformation. In these cases and many more, generative AI is an all-to-eager assistant in the chaos, though many AI companies are working to reduce the chances for harmful content generation and misconduct by users.

Protecting Intellectual Property Rights

Especially in opposition to AI image and video generation tools, several artists and creators have come forward with claims and lawsuits stating that AI companies are using their original artwork and IP without their permission. Stability AI, Midjourney, and DeviantArt are three of the most notorious AI companies in this regard, with users collectively suing them for training their image models on copyrighted images without consent. Most of these cases and similar ones are still working their way through the legal system, so it’s unclear what the outcome will be and how it will impact IP cases in the future.

Understanding AI’s Impact on Employment

As generative AI use cases become more mature and capable of handling workplace tasks, there’s a growing fear that artificial intelligence will replace large sections of the workforce. There really aren’t any protections in place for human workers against AI bots, at least at this time, so as this technology progresses, people will need to upskill or shift to a different industry in order to remain competitive. It is very likely that certain types of creative, clerical, and technical roles will be replaced or partially upended by generative AI in the coming years.

Addressing the Need for More Regulation

Some basic AI regulations already exist, but few address the complexities and nuances of generative AI versus traditional AI and machine learning. What’s more, few national or international laws and regulations have actually been passed into law, with the EU AI Act being the most notable exception. For the most part, there are only best practices and recommended use policies, which still leaves room for AI and tech corporations to more or less do as they please with their technology and users’ data.

Why Are Generative AI Ethics Important?

The development of an ethical framework for generative AI is critical for a technology powerful enough to replicate human output in so many ways because it’s far too easy to unintentionally misuse its potential. Such misuse can create major problems, including legal liabilities. Developing guidelines to monitor and manage generative AI is necessary to help your organization do the following:

Protect customers and their personal data

Protect proprietary corporate data

Protect creators and their ownership and rights over their work

Prevent dangerous biases and falsehoods from being proliferated

Supplement existing cybersecurity frameworks and best practices

Align with emerging governmental AI and data compliance regulations

To learn more about how businesses are creating guidelines for responsible AI usage, read our guide to AI policy and governance.

Best Practices for Using Generative AI Ethically

It’s clearly perilous for a company to adopt generative AI without clear guidelines in place. The core best practices for ethical use of generative AI focus on training employees, implementing data security procedures, continuously fact-checking an AI system’s output, and establishing acceptable use policies.

Train Employees on Ethical AI Use

If employees are allowed to use generative AI in their daily work, it’s important to train them on what does and doesn’t count as an appropriate use case for AI technology. For the best possible outcomes, train your staff on what data they can and absolutely cannot use as inputs in generative AI models. This will be especially important if your organization is subject to regional or industry-specific regulations.

Additionally, if generative AI is part of your organization’s internal workflow or operations, it’s best if your customers are aware of this, especially when it comes to their personal data and how it’s used. Train your staff to be transparent about this, and explain on your website and to customers directly how you’re using generative AI to make your products and services better. Most important: Clearly state what steps you’re taking to further protect your customers’ data.

Implement Strong Data Security and Management

If your team wants to use generative AI to get more insights from sensitive corporate or consumer data, certain data security and data management steps should be taken to protect any data that’s used as inputs in a generative AI model. Data encryption, digital twins, data anonymization, and similar data security techniques can be helpful methods for protecting your data while still getting the most out of generative AI. Highly powerful modern cybersecurity solutions, such as extended detection and response (XDR) tools, may also help to protect this unconventional but highly vulnerable attack surface.

Check AI Responses for Accuracy and Appropriateness

Generative AI tools may seem like they’re “thinking” and generating truth-based answers, but what they’re trained to do is produce the most logical sequence of content based on the inputs users give and the algorithms on which they’re trained. Though generative AI models generally give accurate and helpful responses through this training, they still can produce false information that sounds true.

Make sure your team is aware of this shortcoming and does not rely on the tool for research needs. Online and industry-specific resources should be used to fact-check all responses received from generative AI tools, especially if you plan to base research, product development, or new customer initiatives on content generated through this technology.

Establish and Enforce Acceptable Use Policies

An acceptable use policy should cover in detail how your employees are allowed to use artificial intelligence in the workplace. This policy must include the ethical expectations of your organization as well as any regional or industry regulations that need to be followed.

Significantly, this acceptable use policy must name various managers and company officers as the responsible parties to monitor the organization—consistently—and hold their respective divisions accountable for adhering to written AI policies. Enforcement procedures and penalties for misuse and non-adherence should be carefully spelled out and distributed to the staff on an ongoing basis.

Generative AI Ethics Laws and Frameworks

Your organization cannot create its policy for ethical use of generative AI in a vacuum. There are a number of national and international regulations and policy standards that should be studied for guidance. Remaining in true compliance over time means staying current with these regulations, which are expected to change rapidly going forward.

European Union AI Act

To date, the only major international AI regulation that has become law is the European Union’s AI Act, which regulates artificial intelligence and related data usage across the EU. The act enforces rules for AI systems that pose any risk to consumer privacy and includes various rules and obligations for developers, deployers, and consumers of AI products. It is heavily focused on transparency and risk mitigation.

The EU AI Act was adopted by the European Parliament and Council in May 2024 and officially published in the Official Journal of the EU two months later. It will be enforced in phases, with some aspects going into effect within a month of publication while others may not be enforced until two or three years later.

Current enforcement mechanisms include a non-legally-binding Code of Practice that requires AI companies to provide technical documentation to the appropriate authorities, provide information on capabilities and limitations to downstream providers, summarize what training data they use, and set up and follow policies that comply with EU copyright laws. Since this is not legally binding, it’s difficult to say what happens to organizations that do not comply; however, stricter legal consequences will likely come once EU AI Act standards are fully in effect.

Other International Regulations

A number of standards bodies around the world have published resources for guidance and support. The most widely recognized include the following:

The European Union’s ethics guidelines for trustworthy AI

The Organisation for Economic Cooperation and Development’s “OECD AI Principles”

SHRM’s “Generative Artificial Intelligence (AI) Chatbot Usage Policy”

U.S. Policies and Standards

The United States government has expressed concerns about the quick development and unfettered growth of AI companies, but for the most part, AI regulations are not yet law. Instead, the government has focused on releasing voluntary frameworks and ethical documents that organizations can choose to follow or abstain from. These are two of the most notable:

NIST’s “Artificial Intelligence Risk Management Framework”

The White House’s “Blueprint for an AI Bill of Rights”

Industry-Specific Frameworks

While several industries have developed their own frameworks for AI usage, for the most part, this is happening on an organization-by-organization and case-by-case basis. Some highly regulated industries, however, are using existing laws and frameworks to ensure sensitive data is protected when used in AI systems:

Healthcare: Both FDA regulations and regulations that are part of HIPAA are used to ensure AI-driven medical devices and all other AI instances used in healthcare are following existing protocols for data privacy and security.

Finance: SEC regulations and fair lending laws are being applied to AI technology to ensure that trading, investing, and lending decisions that are informed by AI do not discriminate against certain consumers or fail to disclose when and how AI is being used to make those decisions.

Transportation: Particularly in autonomous vehicle AI technology, the National Highway Traffic Safety Administration (NHTSA) and several state transportation regulations are requiring generative AI companies to demonstrate comprehensive safety and performance testing before their vehicles can become street legal.

Emerging Trends in AI Ethics Regulations

AI ethics has quickly become a popular topic in the legal field, especially as lawsuits related to intellectual property theft, data breaches, and more come to the fore. Current areas of focus for AI ethics in the legal system include AI liability, algorithmic accountability, IP rights, and support for employees whose careers are derailed by AI development. As more AI regulations pass into law, standards for how to deal with each of these issues individually are likely to pass into law as well.

3 Top Generative AI Ethics Courses to Consider

Given the many ethical complexities created by the rise of generative AI, business decision makers are increasingly looking for formal coursework to gain the necessary background to navigate these challenges. We’ve identified three online courses that provide a mix of theory and practice to help industry professionals best understand the core issues involved with ethics and generative AI.

Generative AI: Implications and Opportunities for Business

This generative AI course from Class Central, RMIT University, and FutureLearn is a beginner-friendly course that focuses on developing a basic understanding of the technology, how it applies to different industries and use cases, and challenges that come with adopting and using this technology at scale. In the final week of the four-week course, users focus primarily on challenges and ethical dilemmas with the technology, looking at best practices for adoption and use, legal and regulatory considerations, and the latest trends and controversies with the technology. This course costs $189.

Visit Class Central’s GenAI Course

Generative AI Law and Ethics Cornell Course

The Generative AI Law and Ethics Cornell Course is an online course from Cornell University that discusses the latest laws and ethical concerns and solutions in the world of generative AI. The course is two weeks long and requires six to eight hours of work per week. It is designed primarily for business leaders, entrepreneurs, and other employees who are hoping to use AI effectively within their organization. The class is taught by a Cornell University professor of law and covers AI performance guarantees, the consequences of using AI, legal liability for AI outcomes, and how copyright laws specifically apply with AI. Cornell University does not publicly disclose the cost, but an AI bot on the site noted that courses typically cost around $3,600.

Visit Cornell’s GenAI Course

Generative AI: Impact, Considerations, and Ethical Issues

Generative AI: Impact, Considerations, and Ethical Issues is a Coursera course that partners with IBM to teach users about generative AI limitations, ethical issues and misuse concerns, how to use generative AI responsibly, and the economic and social impact of generative AI. The course is a beginner-level class that includes approximately five hours of online coursework and six assessments. Upon completion, users can get a shareable LinkedIn certificate and/or use this course to work toward a Generative AI Fundamentals Specialization. This course is free of charge.

Visit IBM’s GenAI Ethical Issues Course

See the eWeek guide to the best generative AI certifications for a broad overview of the top courses covering this form of artificial intelligence.

Bottom Line: Generative AI Ethics Remains Challenging

It’s often challenging to know if you’re using generative AI ethically because the technology is so new and the creators behind it are still uncovering new generative AI use cases, many of which create their own concerns. And even as generative AI technology is changing on what feels like a daily basis, there are still few legally mandated regulations surrounding this type of technology and its proper usage.

Yet despite these challenges, you owe it to your customers, your employees, and your organization’s long-term success to establish your own ethical use policies for generative AI long before regulations require this commitment.

To learn more about the power and potential of these emerging applications, see eWeek’s guide to generative AI tools and applications.

The post Generative AI Ethics: 10 Ethical Challenges (With Best Practices) appeared first on eWEEK.
https://www.eweek.com/artificial-intelligence/generative-ai-ethics/

Voir aussi

News copyright owned by their original publishers | Copyright © 2004 - 2024 Zicos / 440Network
Date Actuelle
jeu. 19 sept. - 03:22 CEST