
Are humans reading your AI conversations?

Wednesday, October 23, 2024, 12:00, by ComputerWorld
Generative AI (genAI) is taking over the tech industry. From Microsoft’s genAI-assistant-turned-therapist Copilot being pinned to the Windows taskbar to Google’s Android operating system being “designed with AI at its core,” you can’t install a software update anymore without getting a new whizz-bang AI feature that promises to boost your productivity.

But when you talk to AI, you're not just talking to AI. A human might well look at your conversations, meaning they aren't as private as you might expect. This is a big deal both for businesses working with sensitive information and for individuals asking questions about medical issues, personal problems, or anything else they might not want someone else to know about.

Some AI companies train their large language models (LLMs) on users' conversations. It's a common concern that your business data or personal details might become part of a model and leak out to other people. But there's a whole other concern beyond that, one that could be an issue even if your AI provider promises never to train its models on the data you feed it.

Want more about the future of AI on PCs? My free Windows Intelligence newsletter delivers all the best Windows tips straight to your inbox. Plus, you’ll get free in-depth Windows Field Guides as a special welcome bonus!

Why humans are reviewing AI conversations

So why are humans looking at those conversations? It's all about quality assurance and spotting problems. GenAI companies may have humans review chat logs to see how the technology is performing and to identify errors when they occur. Think of it as "spot checks": feedback from human reviewers is then used to train the genAI model and improve how it responds in the future.

Companies also review conversations when they suspect abuse of their service. It’s easy to imagine that the companies could also use AI tools themselves to dig through the masses of chat logs and find ones where there seems to be some sort of problem or a safety issue.

This isn’t new to AI. For example, Microsoft has had contractors listening to people’s Skype audio conversations for quality assurance purposes as well. Yikes.

A real privacy concern

Tools like OpenAI’s ChatGPT and Google’s Gemini are being used for all sorts of purposes. In the workplace, people use them to analyze data and speed up business tasks. At home, people use them as conversation partners, discussing the details of their lives — at least, that’s what many AI companies hope. After all, that’s what Microsoft’s new Copilot experience is all about — just vibing and having a chat about your day.

But people might share data that would be better kept private. Businesses everywhere are grappling with data security amid the rise of AI chatbots; many have banned employees from using ChatGPT at work, or require them to use specific, vetted AI tools instead. Clearly, they realize that any data fed to a chatbot gets sent to that AI company's servers. Even if it's never used to train genAI models, the very act of uploading the data could violate privacy laws such as HIPAA in the US.

For many knowledge workers, it’s tempting to give ChatGPT a big data set of customer details or company financial documents and have it do some of that informational grunt work. But, again, a human reviewer might see that data. The same is true when these tools are put to personal use.
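
If sensitive text has to go through a cloud chatbot at all, one common-sense mitigation is to scrub obvious identifiers first. Here's a minimal Python sketch of that idea; to be clear, this is an illustration rather than anything a particular AI vendor prescribes, and the patterns below only catch a few well-known identifier formats. Real compliance work (HIPAA and the like) demands far more rigorous tooling.

```python
import re

# Simple placeholder substitutions for a few obvious identifier formats.
# Illustrative only; this is nowhere near exhaustive.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Mask known identifier patterns before text is sent to a cloud chatbot."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Reach Jane at jane.doe@example.com, 555-867-5309, SSN 078-05-1120."))
# -> Reach Jane at [EMAIL], [PHONE], SSN [SSN].
```

The point isn't that regexes solve the problem; it's that whatever a human reviewer later sees in the chat log should contain placeholders, not real customer details.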

[Image: Humans may review your conversations with Microsoft Copilot. Credit: Chris Hoffman, IDG]

Do ChatGPT, Copilot, and Gemini use human reviewers?

To be clear, all signs suggest humans are not actively reading the vast majority of conversations with AI chatbots; there are far too many conversations for that to be possible. Still, the main genAI tools you've heard of do at least occasionally use human review.

For example:

ChatGPT lets you turn off chat history by activating a “temporary chat.” With chat history on, the conversations will be used to train OpenAI’s models. With a temporary chat, your conversations won’t be used for model training, but they will be stored for 30 days for possible review by OpenAI “for safety purposes.” ChatGPT’s Enterprise plans provide more data protections, but human reviewers are still involved at times.

Microsoft says Copilot conversations are also reviewed by humans in some situations: “We include human feedback from AI trainers and employees in our training process. For example, human feedback that reinforces a quality output to a user’s prompt, improving the end user experience.”

Google’s Gemini also uses human reviewers. Google spells it out: “Please don’t enter confidential information in your conversations or any data you wouldn’t want a reviewer to see or Google to use to improve our products, services, and machine-learning technologies.”

[Image: ChatGPT's Temporary Chat option provides more privacy, but humans may still review some of your conversations. Credit: Chris Hoffman, IDG]

How to ensure no one is reading your AI conversations

Companies that need to safeguard business data and follow the relevant laws should carefully consider the genAI tools and plans they use. It’s not a good idea to have employees using a mishmash of tools with uncertain data protection agreements or to do anything business-related through a personal ChatGPT account.

In the long run, AI models that run locally could prove to be the ideal answer. The dream is that an AI chatbot would run entirely on your own computer or mobile device and wouldn’t ever need to “phone home.” Companies could run their own AI software in their own data centers, if they so chose — keeping all of their data entirely under their own control.
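
For a sense of what "local" means in practice, here's a minimal sketch that queries a model served by the open-source Ollama runtime on the same machine. This is my illustration, not something from the article: it assumes Ollama is installed, listening on its default port (11434), and has a model such as llama3 already pulled. The prompt and response never leave localhost.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    # Build a non-streaming generation request; everything stays on this machine.
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")
    request = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

if __name__ == "__main__":
    # No cloud account, no remote chat log, no human reviewer on the other end.
    print(ask_local_model("Summarize our Q3 sales notes in three bullet points."))
```

The same idea scales up: a company can host an inference server inside its own network, so prompts containing business data never land in a third party's logs in the first place.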

Despite all the criticism of Microsoft's Recall tool, which will let you search through your Windows 11 desktop usage on a Copilot+ PC when it launches, Recall has the right idea in many ways: it will do everything on your own PC without sending anything to Microsoft, and human reviewers won't see it.

On the flip side, Google recently launched AI history search for Chrome — and, again, human reviewers might examine your browser history searches if you try it out.

[Image: Google warns that humans may see your browsing history if you turn on AI history search in Chrome. Credit: Chris Hoffman, IDG]

Two sides of the AI-human question

Let’s come back to earth. I don’t mean to be Chicken Little here: The average person’s ChatGPT conversations or Copilot chats probably aren’t being reviewed. But what’s important to remember is that they could be. That’s part of the deal when you sign up to use these services. And now more than ever, that’s something critical for everyone to keep in mind — from businesses using AI professionally to people chatting with Copilot about their hopes and dreams.

Let’s stay in touch! My free Windows Intelligence newsletter delivers all the best Windows advice straight to your inbox. Plus, get free Windows Field Guides just for subscribing!
https://www.computerworld.com/article/3574984/are-humans-reading-your-ai-conversations.html

