5 ideas to help bridge the genAI skills gap

Tuesday, October 14, 2025, 13:00, by ComputerWorld
Generative AI has gone from curiosity to core capability in less than two years. Companies across sectors now face an urgent skills shortage — not just of AI specialists, but of employees who can use generative AI tools in everyday work.

In industries ranging from IT services and software development to staffing, travel, and industrial solutions, technology leaders are experimenting with new ways to prepare their people for an AI-first economy. Taken together, their experiences highlight five big ideas for closing the skills gap.

Shift the focus from skills to mindsets

At UST, a global digital services company with over 32,000 employees, Krishna Prasad, CIO and chief strategy officer, argues that the rise of generative AI presents an existential challenge. Expertise, historically UST’s value proposition, has become commoditized.

“Customers can now access baseline knowledge for free, anytime,” Prasad said. “Clients won’t pay for basic skills anymore. What matters now is problem solving.”

Instead of focusing narrowly on technical skills, UST has shifted its training toward cultivating adaptable mindsets.

“We want to develop curiosity, critical thinking, and creativity — skills that aren’t easily replaced by AI,” said Prasad, stressing that traditional classroom-style learning is insufficient when the competitive environment demands experimentation and rapid application. Employees are given access to a range of AI tools such as GitHub Copilot, Google Gemini, and Cursor, and encouraged to experiment safely in R&D environments.

UST built an R&D sandbox, an internal environment where employees can freely try out AI tools and models without risking production systems. This gives people the freedom to explore AI’s potential without fear of breaking something mission-critical.

While traditional recorded training modules still exist, Prasad stressed that experimentation and problem-solving are what really move the needle. The sandbox helps shift learning away from theory and into practice. The experimentation environment isn’t just technical — it’s cultural. It nudges employees to get comfortable with trial, error, and iteration, which are essential for adapting to AI’s rapid evolution.

The company frames continuous adoption through its “Take Flight with AI” program, which consists of three stages — taxi, takeoff, and cruise. Success is directly tied to outcomes: Are employees applying AI in client engagements? Are they participating in hackathons? Are they delivering real value with agentic AI use cases?

Key lesson: The essential challenge is getting employees to take personal responsibility for continuous learning.

Integrate learning into the flow of work

According to Jill Busch, director of learning and development for North America at ManpowerGroup, traditional training is too slow for the rapid evolution of AI.

Rather than pulling people out of their daily jobs for separate training sessions, the company embeds training directly into daily workflows, at the points where people are most likely to need it. Digital adoption platforms like Whatfix provide in-system nudges and tips directly in the tools recruiters use, guiding them in real time.

Recruiting system training is integrated within the application itself. Users don’t even realize they’re interacting with a digital coach that’s training them to use the system and its AI features — such as candidate sourcing, resume analysis, and client outreach — effectively. According to Busch, the payoff is measurable: “how-to” support questions have dropped 95% since workflow learning was implemented.

Busch emphasizes agility over polish, using what she calls “dirty design.” Training modules are developed using AI-powered tools, deployed quickly, and iterated frequently to keep pace with technology. Micro-learning modules and embedded AI simulations deliver short, consumable lessons inside daily platforms, reinforcing skills without disrupting productivity.

“Built-in feedback loops let employees flag issues or suggest improvements from within their workflow, which allows us to refine training continuously,” Busch said.

Ethical AI guidelines are integrated from the outset, and onboarding for all new employees includes AI training as a baseline requirement.

Key lesson: AI training succeeds when it’s embedded in daily workflows, giving employees real-time support and guardrails so they learn while doing their jobs.

Scale structured programs

Sudhir Mehta, global VP for IoT and Edge Computing Solutions at Lexmark (a subsidiary of Xerox), leads one of the most structured training initiatives: the company’s AI Academy. What began with five data scientists has scaled to more than 5,000 employees through modular curricula, mentorship, and partnerships with institutions such as North Carolina State University and Microsoft. Graduates of the AI Academy become mentors, sustaining the momentum, which executives reinforce by sharing internal success stories.

“For pro-code development, we use the SAFe process, which aligns executive strategy with development across all teams,” Mehta explained. “Every quarter, we conduct PI planning sessions so that objectives are clearly cascaded from executives down to developers.”

Training is delivered through hybrid models — self-paced e-learning for scale, synchronous workshops for engagement, and longer tracks like the 20-week AI Excellence Program. Ethics and governance board members co-lead many sessions, embedding responsible use into the curriculum.

Success is measured differently by audience: for developers, through ROI on enterprise-scale projects; for no-code users, through adoption data, including Microsoft Copilot usage. One way of managing costs is to run pilot programs that demonstrate ROI before scaling.

Key lesson: Structured, executive-aligned programs with a mentor loop can scale AI literacy to thousands.

Build an AI-first culture

At eSky Group, a global online travel agency, AI is no longer optional. According to Tomasz Lis, engineering manager and AI advocate, it’s central to how the company operates.

With 800 employees, eSky adopted a community-driven model. Every employee has access to enterprise-grade tools such as Gemini Pro, GitHub Copilot, and Claude Code. More than half use them daily.

“We’re moving toward an AI-First model, where using AI is as natural as using email or a browser,” Lis said.

The culture is reinforced by grassroots initiatives. AI ambassadors mentor colleagues, a Slack community shares best practices, and “Demo Fridays” showcase internal projects. Training is layered, combining vendor content, external services from cloud providers, and peer-driven learning.

The results are clear: Marketing now generates video in minutes, customer service analyzes every interaction, and developers collaborate across stacks with AI support.

Key lesson: Cultural readiness and peer-led communities are as critical as structured training.

Embed AI in the development lifecycle

At Globant, a $2 billion IT services firm, Juan José López Murphy, head of AI, describes the challenge bluntly: “The way software is built has fundamentally changed. If we don’t upskill, we risk becoming obsolete.”

Globant’s strategy is to embed AI not just into coding, but into every step of the software development lifecycle — planning, testing, monitoring, and deployment. Developers are trained to be fluent in all major large language models (LLMs) — ChatGPT, Claude, Gemini, Bedrock — and to use proprietary agents like Globant GPT for client projects.

The training itself is layered. Basic prompting covers clarity of objectives, context framing, and avoiding “context rot.” Context rot occurs when conversations with AI agents grow too long or unfocused, causing the system to lose track of instructions and generate lower-quality output. Developers are trained to recognize when to reset or reframe prompts so the AI can stay aligned with goals.
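The reset-or-reframe idea can be sketched in code. Below is a minimal, hypothetical example of one common mitigation — trimming a conversation back to the system instructions plus the most recent turns once it grows past a budget. The message shapes, `MAX_CHARS` budget, and `trim_history` helper are illustrative, not part of any specific tool Globant uses.

```python
# Context-rot mitigation sketch: when a conversation grows past a
# budget, keep the system instructions and drop the oldest turns.
# MAX_CHARS stands in for a real token budget.

MAX_CHARS = 200

def trim_history(messages: list[dict]) -> list[dict]:
    """Keep the system message; drop oldest turns until under budget."""
    system = [m for m in messages if m["role"] == "system"]
    turns = [m for m in messages if m["role"] != "system"]
    while turns and sum(len(m["content"]) for m in system + turns) > MAX_CHARS:
        turns.pop(0)  # drop the oldest turn first
    return system + turns

# Example: a system message plus five long user turns.
history = [{"role": "system", "content": "Follow the spec."}]
history += [{"role": "user", "content": "x" * 80} for _ in range(5)]
print(len(trim_history(history)))  # prints 3: system + two recent turns
```

Production systems often summarize the dropped turns instead of discarding them outright, but the principle is the same: shed stale context so the model stays aligned with its instructions.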

Intermediate skills include retrieval-augmented generation (RAG), reflection, and orchestration. RAG trains employees to ground AI outputs in reliable sources, teaching them how to connect LLMs with company data or external knowledge bases.
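The grounding step at the heart of RAG can be illustrated with a toy sketch. The keyword-overlap retriever and prompt template below are assumptions for demonstration — a real pipeline would use vector embeddings and pass the prompt to an LLM — but the shape is the same: retrieve relevant documents first, then constrain the model to answer from them.

```python
# Minimal RAG sketch: retrieve relevant documents, then build a
# prompt grounded in them. The retriever here uses naive keyword
# overlap purely for illustration.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; return top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Assemble a prompt that tells the model to answer only from
    the retrieved context — the core idea behind RAG."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

docs = [
    "Refund requests must be filed within 30 days of purchase.",
    "Our offices are closed on public holidays.",
    "Refunds are issued to the original payment method.",
]
prompt = build_grounded_prompt("How do refund requests work?", docs)
print(prompt)
```

Training at this level teaches employees what to retrieve from and how to instruct the model to stay inside that context, rather than the mechanics of any one framework.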

Reflection develops critical thinking, as learners practice prompting AI to review and improve its own outputs rather than accepting them at face value. Orchestration builds systems thinking by training people to coordinate multiple AI agents with distinct roles, such as planner, coder, and tester, to solve complex tasks collaboratively.
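The reflection pattern reduces to a simple loop: draft, critique, revise. The sketch below uses a canned stand-in `model` function in place of a real LLM call, so the responses are fixed, but the control flow is the part learners practice.

```python
# Reflection sketch: ask a (stubbed) model to critique and revise
# its own draft before accepting it. `model` is a stand-in for a
# real LLM call and returns canned responses.

def model(prompt: str) -> str:
    """Hypothetical LLM stand-in keyed on the prompt prefix."""
    if prompt.startswith("CRITIQUE"):
        return "The draft omits error handling."
    if prompt.startswith("REVISE"):
        return "Revised draft with error handling added."
    return "First draft."

def reflect(task: str, rounds: int = 1) -> str:
    """Draft an answer, then run critique/revise rounds on it."""
    draft = model(f"SOLVE: {task}")
    for _ in range(rounds):
        critique = model(f"CRITIQUE this draft for '{task}': {draft}")
        draft = model(f"REVISE to address: {critique}\nDraft: {draft}")
    return draft

print(reflect("write a file parser"))
# prints "Revised draft with error handling added."
```

Orchestration extends the same idea across several such loops, with each agent — planner, coder, tester — playing a fixed role and handing its output to the next.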

Advanced work is handled by specialists in prompt optimization. Training at this level emphasizes experimentation with structure, wording, and sequencing to achieve consistent, high-quality outputs across different contexts.

Developers participate in hackathons where they solve client-like challenges with AI, and competitions where teams demonstrate speed and accuracy improvements using code generation agents. Practical guides — such as “Ten Rules for Effective AI Use” — reinforce best practices, while proprietary tools like “Neo inside Coda” embed lessons into daily workflows. These activities keep developers motivated and build habits of experimentation and collaboration.

Key lesson: Success comes not just from using AI tools, but from training teams to apply them effectively at every stage of development.

A checklist for planning genAI training

To build an effective genAI training program, IT leaders and L&D managers should focus on these essentials:

Budget & ROI – Account for licensing, course costs, and time off desk; use pilot programs to prove value before scaling.

Curriculum Design – Blend vendor content, academic partnerships, and custom modules tailored to business roles.

Delivery Models – Mix self-paced modules, live workshops, workflow nudges, and hackathons to balance scale with engagement.

Governance & Ethics – Embed responsible AI use by involving ethics and governance leaders in training.

Measurement – Track business outcomes such as productivity, ROI, and client satisfaction — not just course completions.

Culture & Mindset – Identify and support ambassadors. Build communities of practice and safe sandboxes, while fostering adaptability and critical thinking.

Executive Alignment – Ensure leaders sponsor programs, cascade goals into planning processes, and champion success stories.

Onboarding & Continuous Learning – Make AI literacy part of new-hire programs and sustain momentum through mentoring, refresh cycles, and encouraging employees to take personal responsibility for learning.

Generative AI is transforming how organizations work, but the workforce challenge is daunting. As these five companies show, there’s no single model for success — some rely on culture, others on structure — but all converge on a common set of principles: training must be embedded in work, driven by outcomes, and continually refreshed.

The winners will not be those who teach the most skills, but those who cultivate the most adaptable, AI-confident employees. Done right, closing the genAI skills gap can turn a looming talent shortage into a durable competitive advantage.

More on upskilling:

CIOs get serious about closing the skills gap — mainly from within

IT leaders: What’s the gameplan as tech badly outpaces talent?

How to discover hidden tech talent in your organization

Balancing hard and soft skills: the key to high-performing IT teams
https://www.computerworld.com/article/4055753/5-ideas-to-help-bridge-the-genai-skills-gap.html

