Building a golden path to AI
Monday, October 27, 2025, 10:00, by InfoWorld
It’s clear your company needs to accelerate its AI adoption. What’s less clear is how to do that without it being a free-for-all. After all, your best employees aren’t waiting on you to establish standards; they’re already actively using AI. Yes, your developers are feeding code into ChatGPT regardless of any policy you may be planning. Recent surveys suggest developers are adopting AI faster than their leaders can standardize it; that gap, not developer speed, is the real risk.
This creates what Phil Fersht calls an “AI velocity gap”: the chasm between teams frantically adopting AI to win and central leadership dithering over the risk of getting started. Sound familiar? It’s “shadow IT” all over again, but this time it’s powered by your data. I’ve written about the hidden costs of tech sprawl, whether it was unfettered developer freedom leading to unmanageable infrastructure or the lure of multicloud turning into a morass of interoperability nightmares and cost overruns. When every developer and every team picks their own cloud, their own database, or their own SaaS tool, you don’t get innovation; you get chaos. This may be the status quo, but it’s a recipe for failure. What’s the alternative?

The problem with official platforms

The temptation for a platform team is to see this chaos and react by building a gate. “Stop! No one moves forward until we have built the official enterprise AI platform.” They’ll then spend 18 months evaluating vendors, standardizing on a single large language model (LLM), and building a monolithic, prescribed workflow. Good luck with that. By the time they launch that one true platform to rule them all, it will be hopelessly obsolete. Heck, at the current pace of AI, it risks obsolescence before adoption. The model they standardized on will have been surpassed five times over by newer, cheaper, and more powerful alternatives. Their developers, long since frustrated, will have routed around the platform entirely, using their personal credit cards to access the latest APIs, creating a massive, unsecured, unmonitored blind spot right in the heart of the business.

Trying to build a single, monolithic gate for AI won’t work. The landscape is moving too fast. The needs are too diverse. The model that excels at summarizing legal documents is terrible at writing Python. The model that’s great for marketing copy can’t be trusted with financial projections.
Even within engineering, the model that’s brilliant at refactoring Java is useless for writing K8s manifests. The problem, however, isn’t the desire for a platform; it’s the definition of one.

From prescribed platforms to composable products

Bryan Ross recently wrote a great post on “golden paths” that perfectly captures this dilemma. (It builds on other, earlier arguments for these so-called golden paths, like this one on the Platform Engineering blog.) He argues that we need to shift our thinking from “gates” to “guardrails.” The problem, as he sees it, is that platform teams often miss the mark on what developers actually need. As Ross writes: “Most platform teams think in terms of ‘the platform’—a single, cohesive offering that teams either use or don’t. Developers think in terms of capabilities they need right now for the problem they’re solving.”

So how do you balance those competing interests? His suggestion: “Platform-as-product thinking means offering composable building blocks. The key to modular adoption is treating your platform like a product with APIs, not a prescribed workflow.”

Ross nails the problem. Now what do we do about it? Instead of asking a committee to pick the model, platform teams should build a set of services, or composable APIs, that channel developer velocity. In practice, this all starts with a de facto interface standard. One de facto standard is the OpenAI-style API, now supported by multiple back ends (e.g., vLLM). This doesn’t mean you bless a single provider; it means you give teams a common contract, probably fronted by an API gateway, so they can swap engines without rewriting their stack. That gateway is also the perfect place to enforce structured outputs as a rule. “Just give me some text” is fine for a demo but won’t work in production. If you want durable integrations, standardize on JSON-constrained outputs enforced by schema.
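To make the “common contract” idea concrete, here is a minimal sketch in Python. The gateway hostnames and model name are hypothetical; the point is that any OpenAI-compatible back end (a hosted provider, a self-hosted vLLM server, and so on) can sit behind the same request shape, so swapping engines changes a URL, not the application code.

```python
# Hypothetical gateway endpoints; any OpenAI-compatible back end
# (a hosted provider, a vLLM server, etc.) can sit behind them.
BACKENDS = {
    "default": "https://ai-gateway.internal/v1",
    "vllm-lab": "http://vllm.lab.internal:8000/v1",
}

def build_chat_request(prompt: str, model: str, backend: str = "default") -> dict:
    """Build an OpenAI-style /chat/completions request.

    Swapping `backend` changes only the URL, never the payload shape,
    which is the point of standardizing on one contract.
    """
    return {
        "url": f"{BACKENDS[backend]}/chat/completions",
        "payload": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req_a = build_chat_request("Summarize this contract.", "team-default-model")
req_b = build_chat_request("Summarize this contract.", "team-default-model",
                           backend="vllm-lab")
# Same contract, different engine: only the URL differs.
assert req_a["payload"] == req_b["payload"]
assert req_a["url"] != req_b["url"]
```

In a real deployment the gateway, not the caller, would pick the back end, and would also attach the JSON-schema `response_format` constraint discussed above.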
Most modern stacks support this, and it’s the difference between a cute demo and a production-ready system.

This same gateway becomes your control plane for observability and cost. Don’t invent a new “AI log”; instead, use something like OpenTelemetry’s emerging genAI semantic conventions so prompts, model IDs, tokens, latency, and cost are traceable in the same tools site reliability engineers already run. This visibility is precisely what enables effective cost guardrails.

The critical bedrock of all this is data access governance. This is an area where you need to be resolute, keeping identity and secrets where they already live. Require runtime secret retrieval (no embedded keys) and unify authorization with your enterprise identity and access management. The goal is to minimize new attack surfaces by absorbing AI into existing, hardened patterns.

Finally, allow exits from the golden path, but with obligations: extra logging, a targeted security review, and tighter budgets. As Ross recommends, build the override into the platform, such as a “proceed with justification” flag. Log these exceptions, review them weekly, and use that data to evolve the guardrails.

Platform as product, not police

Why does this “guardrails over gates” posture work so well for AI? Because AI’s moving target makes centralized prediction a losing strategy. Committees can’t approve what they don’t yet understand, and vendors will change from under your standards document anyway. Guardrails make room to safely learn by doing. This is what smart enterprises already learned from cloud adoption: Productive constraints beat imaginary control. As I’ve argued, carefully limiting choices enables developers to focus on innovation instead of the glue code that becomes necessary after development teams build in diverse directions. This is doubly true with AI. The cognitive load of model selection, prompt hygiene, retrieval patterns, and cost management is high; the platform team’s job is to lower it.
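A sketch of what the gateway’s observability hook might record per call. The `gen_ai.*` attribute names follow OpenTelemetry’s generative-AI semantic conventions (which are still evolving); the `llm.*` names, the price table, and the model ID are illustrative assumptions, not part of any standard.

```python
# Assumed prices per 1K tokens; a real gateway would load these
# from the provider's current price sheet.
PRICE_PER_1K = {"input": 0.005, "output": 0.015}

def genai_span_attributes(model_id: str, input_tokens: int,
                          output_tokens: int, latency_ms: float) -> dict:
    """Build span attributes for one model call.

    The gen_ai.* keys track OTel's emerging genAI semantic conventions;
    the llm.* keys are illustrative custom attributes.
    """
    cost = (input_tokens * PRICE_PER_1K["input"]
            + output_tokens * PRICE_PER_1K["output"]) / 1000
    return {
        "gen_ai.request.model": model_id,
        "gen_ai.usage.input_tokens": input_tokens,
        "gen_ai.usage.output_tokens": output_tokens,
        "llm.latency_ms": latency_ms,    # illustrative, not semconv
        "llm.cost_usd": round(cost, 6),  # illustrative, not semconv
    }

attrs = genai_span_attributes("team-default-model", 1200, 300, 840.0)
```

Because these land as ordinary span attributes, the cost guardrails become plain trace queries in the observability stack the SRE team already operates.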
Golden paths let you move at the speed of your best developers while protecting the enterprise from its worst surprises. Most importantly, this approach meets your organization where it is. The individuals already experimenting with AI get a safe, fast on-ramp that doesn’t feel like a checkpoint. Platform teams get the compliance, visibility, and cost controls they need, and developers don’t feel stymied by process. And leadership gets the one thing enterprises are starved for right now: a way to turn a thousand disconnected experiments into a coherent, measured, and governable program.
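As a concrete illustration of the exception path described earlier, the “proceed with justification” override might be sketched like this. Every name here is hypothetical; the two invariants are the ones the article calls for: no silent escape hatch, and every exception logged for the weekly review that evolves the guardrails.

```python
import datetime

# In-memory exception log; a real platform would ship these records
# to the same observability pipeline as everything else.
EXCEPTION_LOG: list[dict] = []

def request_off_path(team: str, capability: str, justification: str) -> bool:
    """Grant an off-golden-path exception only with a written justification.

    Grants are logged so the weekly review can feed back into the
    guardrails. All names here are illustrative.
    """
    if not justification.strip():
        return False  # no silent escape hatch
    EXCEPTION_LOG.append({
        "team": team,
        "capability": capability,
        "justification": justification,
        "granted_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return True

assert not request_off_path("search", "unvetted-model", "")
assert request_off_path("search", "unvetted-model", "benchmarking latency")
```

The obligations that come with the override (extra logging, a targeted security review, tighter budgets) would hang off the same record.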
https://www.infoworld.com/article/4079018/building-a-golden-path-to-ai.html