OpenAI offers help promoting AI outside the US, but analysts question why countries would accept
Thursday, May 8, 2025, 02:26, by ComputerWorld
OpenAI, acting as part of the US government-led Stargate AI project, on Wednesday rolled out a program called OpenAI for Countries. The idea is for Stargate to help other countries create their own genAI environments, including data centers and genAI models.
But analysts argue that other countries might be hesitant to join a US government-led effort, given the sensitive issues of data privacy and business intellectual property.

Alvin Nguyen, a senior analyst with Forrester, said that this might not be the ideal time to champion the United States as the technology beacon to emulate. “If it is tied to the US government, there will be questions in terms of what gets shared to move the models forward. That is going to be important,” Nguyen said. OpenAI “may not be able to fully separate itself from Stargate.” Nguyen said that various governments might want to explore alternatives to partnering with a US government-led effort. “I don’t know if that is in their interest right now, given the state of geopolitics.”

Gartner analyst Arun Chandrasekaran agreed. “Several countries already have parallel sovereign AI efforts, and whether they choose to partner with OpenAI is yet to be seen,” Chandrasekaran said. “Countries are striving to create a vibrant AI ecosystem that isn’t dependent on a single provider – which is an undercurrent that OpenAI and its partners need to navigate.” Chandrasekaran added, “there is not a compelling reason that this would resonate [with other countries]. OpenAI has a very steep chasm to cross in terms of convincing these customers about the data sovereignty aspect. This is not going to be an easy thing to pull off.”

The statement issued by OpenAI did not make clear whether the effort comes solely from OpenAI or from Stargate, the US government-led AI coalition whose charter members are OpenAI, Oracle, and SoftBank. It appeared to be introduced by OpenAI, but with the company acting as a key member of Stargate rather than on its own as an AI vendor.

The statement said that the initiative is a response to requests from foreign governments. “We’ve heard from many countries asking for help in building out similar AI infrastructure—that they want their own Stargates and similar projects,” it said.
“It’s clear to everyone now that this kind of infrastructure is going to be the backbone of future economic growth and national development.”

The statement did not identify any of these countries, and OpenAI did not respond to a Computerworld request for an interview.

Statement phrasing ‘could prove unhelpful’

Analysts and other industry observers said that the language OpenAI used in the statement might itself cause hesitation among potential government partners, especially in Europe. “We want to help these countries, and in the process, spread democratic AI, which means the development, use and deployment of AI that protects and incorporates long-standing democratic principles,” the statement said. “We believe that partnering closely with the US government is the best way to advance democratic AI” and “provide a clear alternative to authoritarian versions of AI that would deploy it to consolidate power.”

Forrester’s Nguyen said the phrasing might prove unhelpful to OpenAI’s sales efforts. “Saying ‘US led’ and ‘Democratic AI,’ that may not be universally desired by every government, every country out there,” Nguyen said.

The OpenAI for Countries effort includes several elements: helping to build “in-country data center capacity,” delivering “customized ChatGPT,” and working to “raise and deploy a national start-up fund.” In exchange, the statement said, “partner countries also would invest in expanding the global Stargate Project—and thus in continued US-led AI leadership and a global, growing network effect for democratic AI.” The statement said that the group’s goal “is to pursue 10 projects with individual countries or regions as the first phase of this initiative and expand from there.”

Another benefit to OpenAI in this effort would be the opportunity to gather as much non-English data as possible to train future model versions.
The lack of non-English training data has weakened the effectiveness of the genAI models from just about all of the major model makers.

Data protection crucial

Christian Khoury is the CEO of Easy Audit, a Toronto-based AI company that sells compliance automation platforms. “Most genAI models outside English are half-baked at best. I’ve seen firsthand how broken these tools get when applied to anything multilingual or local,” Khoury said. “OpenAI acknowledging that and putting serious resources into solving it is a big deal.”

Khoury argued that data protections will be critical if OpenAI’s global efforts are to have a chance of working. “The countries that are going to be implementing and installing OpenAI models need real data sovereignty with enforceable contracts,” Khoury said, acknowledging that it can be challenging to enforce legal contracts across national borders.

“There’s a fine line between infrastructure support and digital colonization. If these partnerships are just democracy-washed ways to expand US AI dominance, countries will catch on fast,” Khoury added. “To make this work, OpenAI has to treat local data, languages, and governance as assets and not just variables to plug into a US-built model. Sovereign AI means local control, not just local hosting.”

He also said that he is “watching how this plays with their safety commitments. ‘Democratic AI’ sounds great, but the hard part is making sure it can’t be quietly flipped to authoritarian ends down the line. Infrastructure is easy. Guardrails are hard. The world doesn’t need another digital Belt and Road.”

To make it work, Khoury said, “third-party audits need to happen and I need to choose my own third-party auditors to have red teams to stress test the models for bias and manipulation. We are trying to avoid US intelligence tampering with the model.”

Khoury stressed that data protections must be not only strict but also transparent. “Who gets to keep what data?
And how are you protecting those things? What measures are being put in place to safeguard each country’s intellectual property?” Khoury asked. “How do you install a fence around that data to ensure that it doesn’t get out?”

Brian Jackson, principal research director at Info-Tech Research Group, also questioned how foreign governments would view OpenAI’s take on data sovereignty. “OpenAI says it would help countries build sovereign data center capacity. But would a data center built with a foreign partner truly be trusted as sovereign?” he asked. “And OpenAI says it will raise and deploy a national start-up fund that includes its own capital. But would we really expect that fund to be supportive of local AI efforts to compete with OpenAI offerings? The conflicts of interest are apparent and problematic.”

Victor Tabaac, chief revenue officer at AI consulting firm All In Data, agreed that data controls will determine how this OpenAI effort plays out. “Governments will demand control over data and outputs, potentially creating conflicts with OpenAI’s principles. There’s also a risk of vendor lock-in, as countries may prefer open-source alternatives,” Tabaac said. “Partnering with governments isn’t just about better data—it’s a geopolitical minefield. Countries will demand control over how models are trained and used. Will they allow Saudi Arabia to censor outputs on religion? Or let the EU retroactively edit models under GDPR? Transparency will make or break trust here.”

Potential conflict of interest

Jackson pointed out that there are plenty of potential conflicts of interest in what OpenAI said it was trying to do. “OpenAI is saying that it can help govern AI or evolve ‘security and safety controls.’ However, clearly, as a company that stands to profit from AI adoption, there could be a conflict of interest here.
If this partnership program is successful, it continues a trend that we’re seeing away from public sector-supported frameworks to govern technology and toward private-sector best practices,” he said. “We should also consider how seriously other countries will take OpenAI’s claim that it will be an ally in providing democratic AI, something it hasn’t even clearly defined. It makes it clear that its primary partner is the US government. What are other countries that have recently entered into trade disputes or even more serious conflicts with the US to make of that association?”

Jackson felt particularly strongly about where the current AI trends may lead if OpenAI delivers on its stated goals. “Let’s look at it from the perspective of the services that OpenAI is offering to bring to citizens through partnering with governments. There’s a concept called disintermediation, which examines how technology companies are usurping the relationships that democratic governments have with their citizens by providing the key information and services that citizens historically depended on the state for. What OpenAI is proposing would without a doubt represent a power shift from the state to a private company for a pretty considerable range of informational interactions,” he said.

“For example, OpenAI suggests it could provide ‘customized ChatGPT to citizens,’ which would localize language and imbue cultural considerations into the service. The implication is that the partner government would then use this platform to deliver some set of services to those citizens. However, instead of [the government] owning the relationship with citizens, OpenAI captures that.”
https://www.computerworld.com/article/3980440/openai-offers-help-promoting-ai-outside-the-us-but-ana...