OpenAI expands data residency for enterprise customers
Wednesday, 26 November 2025, 13:17, by ComputerWorld
OpenAI has expanded its data-residency options for enterprise customers, specifically ChatGPT Enterprise, ChatGPT Edu, and API users. The move, according to analysts, could clear one of the biggest hurdles holding enterprises back from adopting the company's LLM stack at scale.
“Enterprises can move from small pilots to full deployments without violating their jurisdiction’s rules on where data should live. The reality is that, earlier, most security and compliance teams weren’t rejecting GenAI because of model design; they were rejecting it because storing data in the US or EU pushed them into conflict with GDPR, India’s incoming DPDPA norms, UAE’s federal rules, or sector-specific mandates like PCI-DSS,” said Akshat Tyagi, associate practice leader at HFS Research.

The expansion changes that, according to Tyagi: enterprises can now run workflows involving regulated or sensitive information because the data can be stored in the specified region under dedicated policies. That directly benefits heavily regulated organizations such as banks, insurers, hospitals, and public-sector bodies.

Enterprises will also benefit operationally, Tyagi pointed out: “Development teams will no longer have to strip or anonymize data just to stay compliant, and procurement teams can fast-track approvals because the storage architecture now aligns with localization requirements, specifically emerging markets, such as India, the UAE, and Australia.”

Caveats for ChatGPT Enterprise and Edu customers

OpenAI’s expansion of data residency beyond the US and Europe to new regions, including the UK, Canada, Japan, South Korea, Singapore, India, Australia, and the UAE, comes with caveats. For ChatGPT Enterprise and Edu customers, residency applies only to new workspaces. Moreover, the expansion covers only data that is stored, or at rest, and not data being used for inference by a model, whose default location remains the US.

“Enterprises will have to look through two different lenses, i.e., where their data is kept, and where their data is actually processed. This update works primarily on the first one, i.e., data-at-rest can now remain within the customer’s region…That means the moment a user interacts with the model, the prompt is temporarily processed on US-based infrastructure before the result is sent back,” Tyagi said.

Compliance grey area might not be a deal breaker

Despite the compliance complexity created by OpenAI’s US-only inference residency, the data residency expansion could still go a long way toward helping enterprises, Tyagi said. “…I have had interactions where many commercial enterprises were stuck only because their data was being stored outside their jurisdiction. If deployed and executed properly, this alone solves 70–80% of the compliance friction that regulated enterprises were dealing with,” Tyagi said. “Most commercial regulated sectors primarily care about storage location, not the transient processing path.”

On the flip side, the analyst warned that some enterprises and institutions, such as defence or government agencies, will remain cautious. “For them, even a temporary inference hop to the US is still a cross-border data flow. And OpenAI isn’t offering that level of isolation yet.”

Enterprises that prefer to avoid any grey areas around compliance can opt for OpenAI’s API Platform, which is now backed by an expanded slate of regional residency options. “For enterprises outside the US, the API policies basically mean that whatever you send through the API stays your data. OpenAI doesn’t use it to train models, and it isn’t stored long-term on their side,” Tyagi said, adding that the simplified policies around the API Platform could be the reason behind its popularity with enterprises. “…it fits real enterprise requirements and expectations, i.e., teams want to plug AI into their own systems, automate workflows, and keep all inputs and outputs inside their existing security and governance setup,” Tyagi said.

There is a caveat for API Platform users as well. “Enterprise Customers that have been approved for advanced data controls can enable regional data residency by creating a new Project in the API Platform dashboard and selecting their preferred region. Requests made through these Projects are handled in-region — model requests and responses are not stored at rest on OpenAI’s servers,” OpenAI wrote in a blog post.

Scratching and clawing against hyperscalers

The data residency expansion could help OpenAI break ground against hyperscalers such as AWS, Microsoft, and Google, at least on data residency, Tyagi said. “The likes of AWS, GCP, and Microsoft already offer in-region storage, sovereign cloud variants, and deeper IAM/identity integration. As of now, OpenAI cannot match any of that. But data-at-rest residency is undoubtedly the first piece of that stack,” Tyagi said. “It doesn’t provide OpenAI with sovereign compute, but it does bring them closer to the architectural expectations enterprises have when evaluating an AI provider alongside a cloud provider,” he added.

Claude maker Anthropic also offers data-at-rest residency only in the US; it is reportedly exploring ways to offer the feature in India, though it does process data outside the US. OpenAI plans to expand its data residency to additional regions soon.
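To make the Project-based residency mechanism concrete, here is a minimal, hypothetical sketch of how an enterprise team might scope API traffic to a region-bound Project. OpenAI's API attributes a request to a Project via the `OpenAI-Project` HTTP header (or a project-scoped API key), so residency policy follows the Project's configuration. The Project ID and key below are illustrative placeholders, not real values, and the exact setup steps live in the API Platform dashboard as described in OpenAI's blog post.

```python
# Hypothetical sketch: attributing API requests to a region-scoped Project.
# Assumes a Project (here the placeholder "proj_example_eu") was already
# created in the API Platform dashboard with a data-residency region selected.

def build_request_headers(api_key: str, project_id: str) -> dict:
    """Build HTTP headers that attribute an OpenAI API request to a Project.

    Requests carrying the `OpenAI-Project` header are billed and governed
    under that Project, so they inherit its data-residency configuration.
    """
    return {
        "Authorization": f"Bearer {api_key}",
        "OpenAI-Project": project_id,  # region-scoped Project (placeholder ID)
        "Content-Type": "application/json",
    }

# Every call made with these headers is handled under the Project's policy.
headers = build_request_headers("sk-placeholder", "proj_example_eu")
print(headers["OpenAI-Project"])
```

Teams can also use a project-scoped API key instead of the header, which removes the risk of a request silently falling back to the default (US-resident) project.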
https://www.computerworld.com/article/4096675/openai-expands-data-residency-for-enterprise-customers...