AI’s biggest supply chain shortage is people

Monday, October 6, 2025, 11:00, by InfoWorld
Your biggest risk in AI isn’t prompt injection or data poisoning: it’s people—specifically, a shortage of talented people. If your organization can’t field enough staff who know how to apply AI to your actual business, you will spend a lot of money on tech that will take you absolutely nowhere. None of this is new: During the cloud computing wave and the first big-data boom (and really, every technology trend), we saw technology race ahead while talent lagged.

The smartest companies are skipping the hiring arms race and turning their existing engineers, analysts, architects, and developers into AI producers. Why? Because domain knowledge is always the key to unlocking technology, and the people with that knowledge already work for you, as Gartner analyst Svetlana Sicular observed years ago: “Organizations already have people who know their own data better than mystical data scientists.”

Enterprises can further accelerate internal talent by leveraging the technologies they already know. The point isn’t to avoid learning anything new; it’s to minimize context switching so more of your staff can contribute. That’s why the “bring AI to your data” guidance matters. If your core systems run on relational databases, for example, use features that let teams keep using SQL while adding embeddings, vector similarity, and JSON/document patterns where they help.
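A minimal sketch of what that looks like in practice, assuming PostgreSQL with the pgvector extension (the table and column names here are hypothetical, and the embedding dimension depends on whatever model you standardize on):

-- Enable in-database vector support (pgvector extension).
CREATE EXTENSION IF NOT EXISTS vector;

-- A hypothetical support-ticket table: ordinary relational columns,
-- a JSON column where document patterns help, plus one embedding column.
CREATE TABLE tickets (
    id        bigserial PRIMARY KEY,
    customer  text NOT NULL,
    body      text NOT NULL,
    body_json jsonb,          -- document-style payload where it helps
    embedding vector(384)     -- dimension depends on your embedding model
);

-- Plain SQL plus one new operator: <=> is cosine distance in pgvector.
-- :query_embedding stands in for the embedding of the user's question.
SELECT id, customer, body
FROM tickets
ORDER BY embedding <=> :query_embedding
LIMIT 5;

Nothing about the team's workflow changes except the embedding column and the distance operator; access controls, backups, and query tuning all carry over.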

Let’s look at how this can work.

The human bottleneck

Wage premiums are a blunt but effective measure of scarcity, and by that metric, the AI skills gap is glaring. Lightcast analyzed 1.3 billion job postings and found a 28% salary premium—nearly $18,000 more per year—for roles that ask for AI skills. PwC’s global analysis is even starker: Workers with AI skills command an average 56% wage premium across industries. These aren’t hype slides; they’re what CFOs see in the payroll data.

Meanwhile, demand is outpacing formal enablement inside companies. A year ago, Microsoft’s 2024 Work Trend Index (based on 31,000 workers across 31 countries plus Microsoft 365 and LinkedIn data) reported that 75% of knowledge workers are already using AI at work, but only 39% have received employer-provided training. Two-thirds of leaders say they wouldn’t hire someone without AI skills. That’s a recipe for shadow AI, uneven results, and mounting risk.

None of this is new in kind, only in degree. A decade ago, Gartner found talent shortages were the most significant barrier to adopting emerging technologies, more constraining than cost or security—and that was before generative AI made every business process a candidate for automation or augmentation.

Think back to the cloud transition. In 2012, IDC (in a Microsoft-sponsored study) forecast 7 million to 14 million cloud-related jobs by 2015 and estimated 1.7 million cloud roles would go unfilled because candidates lacked the skills. In response, universities adjusted to train more students, but the fastest movement came from employers and workers upskilling in place: certifications, internal academies, and operations teams that learned to run cloud infrastructure.

Big data followed the same trajectory. In 2011 McKinsey projected a US shortage of 140,000 to 190,000 workers with “deep analytical talent,” plus a need for 1.5 million managers and analysts able to understand and make decisions based on data. Rather than wait for a new species of data scientist to materialize, Gartner’s Sicular urged companies to develop the people they already had, a point I later echoed: Businesses harbor big-data desires but lack know-how. The fix is training. Replace “Hadoop” with “large language models” and the counsel still holds.

Part of the reason this works is that ultimately we’re not grappling with technologies so much as with human concerns. For example, during the cloud rush, companies learned that the skill you really need is not “AWS” or “Azure,” but how to operate distributed systems reliably within your constraints. During the big-data phase, the real skill wasn’t “Hadoop,” but how to ask better questions of your data and deploy the answers. AI is no different. The durable advantage isn’t a specific model; it’s a workforce that knows when AI helps, how to wire it into the business, and how to keep it safe and measurable over time.

Unlocking the talent

The fastest, lowest-risk way to get AI capacity is to build it into your current team, leveraging the technologies they already know. This isn’t wishful thinking; industry leaders are already doing it. If you’re an IT decision-maker, this should shape both your talent plan and your tech plan:

Bias toward upskilling. Make AI literacy and safe-use patterns table stakes for everyone in technology, not just machine learning specialists. The payoff isn’t just filling roles; it’s compressing the time between idea and shipped value. Make sure everyone can answer four questions: What problems are we trying to solve with AI? What data and guardrails do we need? How do we evaluate outputs? How do we run this in production? You don’t need an army of PhDs for that; you need disciplined engineers and analysts who understand your business and can learn the tools. (I’ve written before about how AI shifts developer work by expanding it. Train for that expansion.)

Exploit the stack you already know. If talent is scarce, the stack is a force multiplier. Gartner projects that by 2028, 80% of genAI business applications will be developed on existing data management platforms, not on greenfield AI stacks. That aligns with common sense. You’ll go faster and involve more of your current staff if you bring AI to your data and systems, rather than ripping and replacing for novelty’s sake.

Use skills you already have to wire in AI. Look at where your teams are strong today (SQL, data modeling, production discipline). For example, SQL is still one of the most widely used languages among professional developers; Stack Overflow’s 2025 survey shows 61% of pros use SQL, and it’s 62% among professionals who use AI tools. That means you can anchor early AI wins in the patterns your teams already know: queries, joins, access controls, lineage, and service-level agreements—now augmented with embeddings, vector search, and retrieval.
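To make that anchoring concrete, here is a hedged sketch of retrieval expressed as everyday SQL, under the same PostgreSQL/pgvector assumption as above; the tables, grants, and top-k value are all hypothetical:

-- Retrieval for a RAG pipeline as an ordinary query:
-- vector similarity narrowed by the same joins and row filters
-- the team already uses for access control.
SELECT d.id,
       d.title,
       d.chunk_text
FROM doc_chunks AS d
JOIN documents  AS doc ON doc.id = d.document_id
JOIN acl_grants AS g   ON g.document_id = doc.id
                      AND g.role = current_user       -- reuse existing authz
WHERE doc.status = 'published'
ORDER BY d.embedding <=> :question_embedding          -- vector search
LIMIT 8;                                              -- top-k context for the model

Because retrieval is just a query, it inherits the lineage, access controls, and service-level monitoring your teams already run.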

Does this sound unglamorous compared to spinning up a bespoke model stack? Good. AI’s business value is unglamorous by nature. It’s retrieval over the right data, sensible workflows, and a feedback loop that improves outcomes. It’s the boring bits.

Remember, boring is good! You have far more SQL-fluent analysts, data engineers, and developers than you have ML engineers. Teaching a SQL-first team to use in-database vectors and retrieval is a lighter lift than asking them to master an entirely new data stack. In the same way, the runbooks you would need to “AI-ify” (backups, failover, authorization, and so on) already exist in your current platforms.

None of this is an argument against specialist tools. Some workloads will justify them. But for most enterprises, the returns come from applying AI to existing processes and data with existing people—faster, safer, and cheaper than ripping and replacing.
https://www.infoworld.com/article/4067902/ais-biggest-supply-chain-shortage-is-people.html
