Call to ban AI superintelligence could redraw the global tech race between the US and China
Wednesday, October 22, 2025, 15:08, by ComputerWorld
More than 850 prominent figures have called for a prohibition on developing AI superintelligence, a move that could reshape enterprise AI investments and intensify the US-China technology race if adopted.
The open letter, released Wednesday by the Future of Life Institute, defined superintelligence as AI systems that “significantly outperform all humans on essentially all cognitive tasks” — going far beyond today’s chatbots and automation tools to systems that could autonomously make strategic decisions, rewrite their own code, and operate beyond human oversight.

The signatories span an unusual political spectrum, from AI pioneers Geoffrey Hinton and Yoshua Bengio to Nobel laureates, Apple co-founder Steve Wozniak, and former Obama administration National Security Advisor Susan Rice. The breadth of the coalition suggests that AI governance is becoming a political issue that crosses traditional partisan lines.

Yuval Noah Harari, an author, a professor at The Hebrew University of Jerusalem, and a signatory to the open letter, added in a personal note that “Superintelligence would likely break the very operating system of human civilization – and is completely unnecessary. If we instead focus on building controllable AI tools to help real people today, we can far more reliably and safely realize AI’s incredible benefits.”

Missing from the list of signatories are the current leaders of major AI companies, including OpenAI, Anthropic, Google, Meta, and Microsoft — reflecting a widening divide between those building advanced AI systems and those calling for constraints.

For enterprises, the debate comes as companies pour billions into AI infrastructure. Meta CEO Mark Zuckerberg established Meta Superintelligence Labs in June after investing $14.3 billion in Scale AI, while OpenAI’s Sam Altman said in January that OpenAI is shifting its focus to superintelligence development.

This is the second time the Future of Life Institute has organized such a campaign. In March 2023, the organization published a letter calling for a six-month pause on training AI systems more powerful than GPT-4. That letter garnered over 30,000 signatures but was ignored by AI companies.

Not an immediate enterprise concern

Analysts say superintelligence remains a distant theoretical risk rather than an operational concern for enterprise IT planning. “Superintelligence remains a long-horizon theoretical risk, not an operating concern within the 2025–2028 enterprise planning window,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research. “CIOs must resist conflating vendor ambition with business utility.”

Gogia said the strategic priority is to stabilize and scale current-generation AI through data governance, model explainability, and validation practices. “The existential discourse around superintelligence belongs to regulators and ethicists; CIOs must build with the tools and truths of the present,” he said.

The letter warned that superintelligence poses risks beyond the workforce disruption AI is already causing. “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in,” the statement said.

The concern centers on what AI researchers call the alignment problem — ensuring that systems smarter than humans pursue goals compatible with human values. Current techniques work for today’s AI but may prove inadequate for systems that surpass human intelligence, according to IBM research on superalignment.

Beyond the theoretical risks of superintelligence, current AI systems are already reshaping enterprise workforces.
According to a September report from Indeed, 26% of jobs posted over the past year are poised to be transformed by generative AI, with technology and finance roles facing the highest risk. Goldman Sachs Research estimates that if current AI use cases were expanded across the economy, 2.5% of US employment would be at risk of displacement. Salesforce CEO Marc Benioff said in August that the firm had cut customer support roles from 9,000 to about 5,000 due to AI capabilities.

Competitive and governance implications

While current AI is already disrupting workforces, a ban on superintelligence development would reshape the competitive landscape for AI vendors. If frontier labs were compelled by regulation to slow superintelligence development, the competitive balance would shift toward providers with domain-specific, controllable AI architectures, according to Gogia.

“Enterprise buyers are already showing a strong preference for ‘adequate-but-verifiable’ models over black-box frontier systems, particularly in regulated or safety-critical contexts,” he said. “A regulated slowdown would accelerate demand for small language models, sovereign AI stacks, and enterprise-hosted fine-tunes with clear lineage, reproducibility, and policy controls.”

A ban could also have a significant economic impact. Gartner forecasts global AI spending to reach nearly $1.5 trillion in 2025 and to top $2 trillion in 2026, driven by hyperscale data center investments and enterprise adoption. The International Monetary Fund projects that AI will boost global GDP by approximately 0.5% annually between 2025 and 2030.

More critically for US companies, a unilateral ban could accelerate China’s AI development. China has made significant advances in AI despite US export restrictions, with companies like DeepSeek and Alibaba releasing competitive open-source models that have shaken Silicon Valley. The US is seen as having about a five-year lead in generative AI today, but China’s massive investments could narrow that gap.

If the prohibition call fails to gain traction, as its 2023 predecessor did, enterprises must assume AI governance is a board-level obligation, according to Gogia. “In the continued absence of enforceable global rules on AI development and deployment, responsibility decentralizes to the enterprise level,” he said. “CIOs must construct guardrails rooted in operational ethics, legal defensibility, and stakeholder trust.”

This includes formalizing AI councils, embedding incident-response protocols, and signing vendor contracts with AI-specific obligations, including dataset transparency and audit rights, Gogia said.
https://www.computerworld.com/article/4077074/call-to-ban-ai-superintelligence-could-redraw-the-glob...