Enterprises are getting worse at multicloud
Friday, April 4, 2025, 11:00, by InfoWorld
In the initial years of multicloud adoption, enterprises took cautious, calculated steps to build and manage infrastructures that spanned multiple cloud providers. The objective was flexibility, performance optimization, and risk mitigation. Yet, in 2025, enterprises are struggling more than ever to operate effective multicloud environments. Why? The rapid movement toward AI systems, combined with a lack of thoughtful planning, has overwhelmed management strategies and pushed many organizations to the brink of multicloud madness.
The integration of AI-driven workloads into enterprise cloud strategies has become a dominant force shaping cloud investment and architecture decisions. However, most businesses underestimate the complexity this adds. In particular, the arrival of GPU-focused cloud providers, such as CoreWeave and others, has dramatically altered the landscape. These clouds promise specialized performance for AI workloads but operate with unique requirements that many enterprises are unprepared to handle.

Moving to AI systems without a plan

The surge in AI adoption has brought many transformative benefits: smarter decision-making, automation, personalized customer experiences, and competitive differentiation. However, enterprises are implementing AI systems without understanding how these technologies integrate into existing multicloud strategies. Here are five of the most significant challenges:

1. AI workloads require expensive, resource-intensive GPUs. Traditional multicloud strategies center on conventional compute and storage needs and don't account for this hardware disparity. Enterprises now juggle incompatible platforms spanning general-purpose clouds and specialized GPU clouds, which often lack tools for seamless integration.
2. AI workloads demand massive amounts of data for training and inference. Enterprises are realizing too late that placing data and AI workloads on different clouds creates inefficiencies. Moving data between clouds isn't cheap, and latency adds further complexity, leading to performance degradation. (The sketch after this list gives a rough sense of the scale.)
3. Each cloud vendor has its own set of management systems, APIs, and operational frameworks, and GPU-focused clouds are no exception. Enterprise IT teams are struggling to standardize operations across increasingly disparate environments.
4. The lack of upfront planning often results in spiraling costs. Enterprises are overprovisioning GPUs, underutilizing cloud resources, and failing to identify opportunities to optimize their multicloud strategy.
5. IT teams lack the expertise to manage AI-centric cloud environments. Legacy multicloud strategies didn't prioritize the unique demands of AI systems, and upskilling takes time that many organizations lack. Enterprises are often blindsided by what their IT teams don't know about deploying AI models.
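To make the second challenge concrete, here is a back-of-the-envelope sketch in Python of what repeatedly moving a training set between clouds can cost. The data volume, refresh cadence, and per-gigabyte egress rate are illustrative assumptions, not quoted prices; substitute your providers' published rates.

```python
# Rough cross-cloud data transfer estimate. All figures below are illustrative
# assumptions, not actual provider pricing.
def monthly_egress_cost(training_data_gb: float,
                        refreshes_per_month: int,
                        egress_rate_per_gb: float) -> float:
    """Cost of repeatedly moving a training set from the cloud that stores it
    to the cloud that runs the GPUs."""
    return training_data_gb * refreshes_per_month * egress_rate_per_gb

# Example: a 50 TB training set refreshed weekly, at an assumed $0.09/GB egress rate
cost = monthly_egress_cost(training_data_gb=50_000,
                           refreshes_per_month=4,
                           egress_rate_per_gb=0.09)
print(f"Estimated cross-cloud transfer cost: ${cost:,.0f} per month")  # -> $18,000 per month
```

Co-locating the data with the GPUs, or at least partitioning the hot subset, removes most of that line item.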
How GPU clouds add complexity

GPU-focused cloud providers such as CoreWeave, Lambda Labs, and others have risen to prominence by optimizing their offerings for artificial intelligence and machine learning workloads. They've become essential as the traditional hyperscalers (AWS, Microsoft Azure, Google Cloud Platform) struggle to meet the exponential demand for GPUs. However, their entry into the multicloud ecosystem creates new challenges:

- Enterprises often get locked into specialized contracts because GPU clouds operate under different economic models, which makes portability tricky.
- Traditional cloud orchestration tools generally don't provide seamless support for GPU clouds, leading to operational silos.
- Many enterprises find it difficult to coordinate GPU workloads between hyperscale providers and GPU-specialized providers, creating gaps in performance and observability.

Without strategic planning, enterprises that adopt GPU clouds risk further fragmenting their multicloud ecosystems, complicating AI-driven initiatives instead of enabling them.

Enterprises need to get a clue

The rapid adoption of AI is throwing enterprises into multicloud chaos. Multicloud environments were already complex, but the introduction of GPU-focused clouds for AI workloads has added extreme operational and architectural challenges. Many enterprises are moving too fast, launching AI initiatives without aligning them with their existing cloud strategies, and the results are predictable: siloed systems, uncontrolled costs, and operational inefficiencies.

The root of the problem is poor planning. Companies continue to underestimate how AI workloads change multicloud dynamics. GPU-centric clouds require unique approaches to integration, data placement, and cost management. Without proper strategies, enterprises are creating fractured infrastructures that are nearing unmanageable levels of complexity.

How to avoid multicloud failure

Start by developing a clear, AI-focused multicloud strategy. This means assessing the current environment, deciding which workloads belong on hyperscalers versus GPU providers, and aligning infrastructure with goals and budgets. Hybrid models can work, but only with deliberate planning to avoid creating silos.

Standardization is also crucial. Centralized orchestration tools such as Kubernetes can streamline the deployment and scaling of containerized AI workloads across diverse platforms. Without standardization, operational silos will only grow. (See the scheduling sketch at the end of this piece.)

Another critical step is to reevaluate data placement strategies. AI workloads depend on massive data sets, and choosing the wrong data location can lead to high transfer costs and latency issues. Enterprises need to store data closer to GPU resources and partition it strategically to optimize performance and minimize costs.

Additionally, cost management must take center stage. Partnering with finops teams can help organizations control GPU cloud costs by preventing overprovisioning and analyzing billing trends. Without tight financial oversight, AI workloads can quickly blow through budgets. (See the utilization sketch at the end of this piece.)

Finally, upskilling IT teams is non-negotiable. AI introduces new concepts such as MLOps, GPU management, and intercloud orchestration, areas where many IT departments still lack expertise. Training staff in these skills will help ensure efficient management of AI-focused multicloud environments and help avoid debilitating operational bottlenecks.

Enterprises must redefine their approach to multicloud if they want AI to succeed. The right answers lie in a deliberate strategy, standardized operations, smarter financial planning, and focused upskilling. Follow these steps to unlock AI's potential without drowning in its complexity. Enterprises that act now will be the ones that thrive tomorrow.
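To illustrate the standardization point, here is a minimal sketch, assuming a Kubernetes cluster reachable through the local kubeconfig and the NVIDIA device plugin installed so that nvidia.com/gpu is a schedulable resource. The image name, namespace, and resource sizes are placeholders rather than a recommended configuration; the point is that one pod spec can request GPUs on any provider that exposes a conformant Kubernetes API.

```python
# Minimal sketch: submit one GPU-backed training pod through the standard
# Kubernetes API. Assumes `pip install kubernetes`, a kubeconfig pointing at
# the target cluster, and the NVIDIA device plugin so "nvidia.com/gpu" exists.
from kubernetes import client, config

def submit_gpu_training_pod(name: str = "train-demo",
                            image: str = "registry.example.com/trainer:latest") -> None:
    config.load_kube_config()  # use config.load_incluster_config() when running inside a cluster
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name=name, labels={"workload": "ai-training"}),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="trainer",
                    image=image,
                    # Requesting GPUs via the extended-resource name keeps the spec
                    # identical whether the cluster runs on a hyperscaler or a GPU cloud.
                    resources=client.V1ResourceRequirements(
                        limits={"nvidia.com/gpu": "1", "cpu": "4", "memory": "16Gi"}
                    ),
                )
            ],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

if __name__ == "__main__":
    submit_gpu_training_pod()
```

Whether this runs unchanged on both a hyperscaler and a GPU-specialized provider depends on each offering managed Kubernetes with the same device plugin, which is exactly the kind of assumption worth verifying before signing a contract.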
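And to illustrate the finops point, a toy utilization check of the kind a cost review might automate. The pool names, utilization samples, and the 30% threshold are hypothetical; real inputs would come from your monitoring stack or billing exports.

```python
# Toy overprovisioning check: flag GPU node pools whose average utilization
# over a sample window falls below a threshold. All inputs are hypothetical.
from statistics import mean

def flag_overprovisioned(pools: dict[str, list[float]], threshold: float = 0.30) -> list[str]:
    """Return names of pools whose mean GPU utilization is below the threshold."""
    return [name for name, samples in pools.items() if mean(samples) < threshold]

samples = {
    "hyperscaler-a100-pool": [0.12, 0.18, 0.09, 0.22],   # mostly idle: candidate for rightsizing
    "gpu-cloud-h100-pool":   [0.71, 0.65, 0.80, 0.77],   # well utilized
}
print(flag_overprovisioned(samples))  # ['hyperscaler-a100-pool']
```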
https://www.infoworld.com/article/3953065/enterprises-are-getting-worse-at-multicloud.html