It’s time to push back against the AI internet
Friday, September 26, 2025, 13:00, by ComputerWorld
The Dead Internet Theory is a false conspiracy theory. But in practical terms, it might as well be true.
Emerging from the deranged muck of 4chan and Wizardchan in the late 2010s, the Dead Internet Theory holds that secret cabals of all-powerful government or corporate conspirators use bots and AI-generated content to replace humans on the internet. The goal: to manipulate public perception, control narratives, and influence the public’s behavior. A central tenet of the theory is that most online content is generated by bots and AI, not people.

The false part is the conspiracy. The true part is that most internet content is indeed bots and AI. Midway through 2025, roughly 74% of newly created content online was generated with the help of AI or bots, according to several large-scale studies. Only about one-quarter of online content is created by people without AI assistance. And the rate of change is rising fast. By the end of the year, more than 90% of all content will be AI-generated, according to some predictions.

The Dead Internet Theory isn’t true. But it might as well be true. To quote the late comedian George Carlin: “You don’t need a formal conspiracy when interests converge.” And in Silicon Valley, the interests are definitely converging.

GenAI content, now on an epic scale

A company called Inception Point AI is churning out AI-generated podcasts on an industrial scale, powered by custom AI agents that leverage OpenAI, Perplexity, Claude, Gemini, and other chatbots to build the content. The company’s Quiet Please Podcast Network has created more than 5,000 podcast shows — not episodes, shows! — hosted by more than 50 AI “personalities.” The company intends to create thousands more “personalities” in the future. It costs them $1 per episode to produce. So, if an episode sells a $2 ad, they make a profit.

Based on what the company’s CEO says in public, Inception Point AI seems captured by the delusion that AI-generated personas are human beings. It’s unknown how many listeners have been deluded into believing that the AI podcast hosts are real people.
(The AI hosts identify themselves as such at the top of each episode.)

“We believe that in the near future half the people on the planet will be AI, and we are the company that’s bringing those people to life,” said CEO Jeanine Wright, ignoring the advice of Microsoft AI CEO Mustafa Suleyman, who wrote in an essay last month that “We must build AI for people; not to be a digital person.” If they believe they’re “bringing people to life,” then by definition they believe themselves to be gods.

She added that the “people who are still referring to all AI-generated content as AI slop are probably lazy Luddites.” (People who create a podcast start to finish by pushing a button are calling others “lazy.”)

Of course, Inception Point AI isn’t alone in the industry. Others, including PodcastAI, Wondercraft AI, and Jellypod, are flooding the zone with fake-people podcasts, too.

The AI podcast startups have a lot of catching up to do. The video startups are way ahead in terms of volume. Companies like T-Series, Sony SAB, SET India, and Zee TV have produced between 20,000 and 234,000 videos each. T-Series leads with nearly 24,000 videos, while some channels like Zee TV have exceeded 215,000 uploads.

It’s everywhere

Google is helping them. Creator AI Studios — which is built into YouTube’s ecosystem — enables small teams and solo creators to publish hundreds of videos per day through auto-editing, thumbnail generation, scene detection, and AI-generated scripts. Platforms like Argil AI, RightBlogger, Team-GPT, and Designs.ai let creators generate scripted TikTok videos without cameras, using AI models for ideas, editing, and even synthetic voiceovers.

AI-generated books are on the rise, too. Two years ago, Amazon had to cap book uploads at three books per day because people were uploading far more than that. Estimates suggest that more than 70% of new self-published Kindle books are partially or fully AI-generated.
On Amazon.com alone, people may be collectively uploading as many as 1,000 AI-generated books per day.

While the state of the art in text, audio, and video fakery now ranges from “very good” to “perfect,” the ability to make or alter photos using AI took a huge leap forward with Google Gemini 2.5 Flash Image (a.k.a. “Nano Banana”). Perfect pictures are a banality now.

More than 15 billion images have been created using text-to-image algorithms since 2022. Roughly 34 million new such images are churned out every day, using models based on Stable Diffusion and others. They’re used for business objectives in advertising, media and entertainment, e-commerce, fashion, and architecture, as well as for digital art and the lucrative field of online influencing. The technology is also used maliciously to create fake news and propaganda, scams, non-consensual deepfake pornography, and AI-generated child sexual abuse material.

The key benefit of AI-generated images — whether for benign or malignant uses — is low cost. AI pictures are super cheap to make.

Studies of platforms like Facebook show that users often fail to recognize images as synthetic, even when they’re badly made and look ridiculous. A 2025 Microsoft report, for instance, found that 73% of survey respondents found it hard to spot AI-generated images, and that they correctly identified them only 38% of the time.

We are months away from an internet where more than 99% of online content is AI-generated. As a result, the human race is being rapidly frog-marched into a world where we interact primarily, or even exclusively, with machines.

Whose idea was this? (Probably the same people who decided to replace “writers” and “readers” with “content creators” and “content consumers.”) Has it occurred to anyone that “content” — articles, books, photographs, videos, and the recorded voice — exists for human beings to communicate with each other, rather than for machines to shovel data at people?
And why do we accept this? We need to start holding the content delivery companies to account.

Demand prioritization for people-created content

It’s time users demand, pay for, or exclusively use services that give them the choice to prioritize human-generated content.

The paid search engine Kagi Search, for example, enables users to avoid AI-generated content primarily through its image-search filtering and labeling system, which allows users to choose whether to include, exclude, or exclusively display AI-generated images. Personalization features enable users to block or downrank specific domains if unwanted AI-generated or low-quality imagery slips past the initial filters, and these controls are available in both image and web search. (Full disclosure: My son works at Kagi.)

DuckDuckGo also allows users to filter out AI-generated images from search results via a dedicated dropdown menu in its image-search interface, where users can choose to hide all AI images. Some stock photo platforms, such as Freepik, have added tools to exclude AI-generated results from search queries.

But this list is pathetically short. While Google offers hard-to-find and ineffectual “NOPE” buttons to turn off some AI content, Microsoft Bing, Facebook, Instagram, Reddit, X, LinkedIn, Pinterest, and other sites do not offer users the ability to opt out of AI at all.

If the industry is going to provide the tools for replacing nearly all online content with AI slop, then it must also provide the tools to opt out. We must demand the option to see content created by people, either primarily or exclusively.

It’s time for the living to rise up against the dead internet.
https://www.computerworld.com/article/4063408/its-time-to-push-back-against-the-ai-internet.html