What are AI agents and why are they now so pervasive?
Thursday, December 5, 2024, 12:00, by ComputerWorld
GenAI’s latest big step forward has been the arrival of autonomous agents: AI-enabled applications capable of perceiving their environment, making decisions, and taking actions to achieve specific goals. The key word here is “agency,” which allows the software to take action on its own. Unlike genAI tools, which usually focus on creating content such as text, images, and music, agentic AI is designed to emphasize proactive problem-solving and complex task execution. The simplest definition of an AI agent is the combination of a large language model (LLM) and a traditional software application that can act independently to complete a task.

In 2025, 25% of companies that use genAI will launch agentic AI pilots or proofs of concept, according to a report by professional services firm Deloitte. By 2027, that number will grow to half of all companies. “Some agentic AI applications…could see actual adoption into existing workflows in 2025, especially by the back half of the year,” Deloitte said. “Agentic AI could increase the productivity of knowledge workers and make workflows of all kinds more efficient. But the ‘autonomous’ part may take time for wide adoption.”

Tech companies large and small are rushing out genAI-based agents, including Microsoft, which last month announced it’s adding automated agents to M365 Copilot. Cisco unveiled agents for customer service in October; that same month, Atlassian unveiled its Rovo genAI assistant and Asana announced AI Studio, a tool that can be used to build agents. In other words, AI agents could soon be as pervasive as other genAI tools in the workplace.

Agentic AI operates in two key ways. First, it offers specialized agents capable of autonomously completing tasks across the open web, in mobile apps, or as an operating system. A specific type of agentic AI, called conversational web agents, functions much like chatbots. In this case, the agentic AI engages users through multimodal conversations, extending beyond simple text chats to accompany them as they navigate the open web or use apps, according to Larry Heck, a professor at Georgia Institute of Technology’s schools of Electrical and Computer Engineering and Interactive Computing.

“Unlike traditional virtual assistants like Siri, Alexa, or Google Assistant, which operate within restricted ecosystems, conversational web agents empower users to complete tasks freely across the open web and apps,” Heck said. “I suspect that AI agents will be prevalent in many arenas, but perhaps the most common uses will be through extensions to web search engines and traditional AI Virtual Assistants like Siri, Alexa, and Google Assistant.”

Other uses for agentic AI

A variety of tech companies, cloud providers, and others are developing their own agentic AI offerings, making strategic acquisitions, and increasingly licensing agentic AI technology from startups and hiring their employees rather than buying the companies outright for the tech. Investors have poured more than $2 billion into agentic AI startups in the past two years, focusing on companies that target the enterprise market, according to Deloitte.

AI agents are already showing up in places you might not expect. For example, most self-driving vehicles today use sensors to collect data about their surroundings, which is then processed by agentic AI software to create a map and navigate the vehicle.
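That sense-process-act pattern is the core loop behind the definition above: an LLM deciding what to do next and a piece of conventional software carrying it out. The following is a minimal sketch of such a loop, with the LLM call stubbed out; the tool names, the call_llm helper, and the stopping condition are assumptions for illustration, not any vendor’s actual interface.

```python
# A minimal, illustrative perceive-decide-act agent loop.
# The LLM call is stubbed out; in practice it would hit a real model API.
# Tool names ("search_web", "read_page") and the stop condition are
# assumptions made for this sketch, not any product's actual interface.

import json

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned decision."""
    return json.dumps({"action": "finish", "result": "Task complete (stub)."})

TOOLS = {
    "search_web": lambda query: f"(stub) search results for {query!r}",
    "read_page": lambda url: f"(stub) contents of {url}",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    observations = []  # the agent's "perception": everything it has seen so far
    for _ in range(max_steps):
        # Decide: ask the LLM what to do next, given the goal and observations.
        prompt = json.dumps({"goal": goal, "observations": observations,
                             "tools": list(TOOLS)})
        decision = json.loads(call_llm(prompt))
        if decision["action"] == "finish":
            return decision["result"]
        # Act: run the chosen tool, then perceive its output on the next turn.
        tool = TOOLS[decision["action"]]
        observations.append(tool(decision.get("argument", "")))
    return "Stopped: step limit reached without finishing."

print(run_agent("Find tomorrow's weather in Paris"))
```

The point of the sketch is the division of labor: the model only chooses among actions, while ordinary code executes them and feeds the results back in.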
AI agents play several other critical roles in autonomous vehicles, including route optimization, traffic management, and real-time decision-making; they can even predict when a vehicle needs maintenance. Going forward, AI agents are poised to transform the overall automated driving experience, according to Ritu Jyoti, a group vice president at IDC Research. For example, earlier this year, Nvidia released Agent Driver, an LLM-powered agent for autonomous vehicles that offers more “human-like autonomous driving.”

These AI agents are also finding their way into myriad industries and uses, from financial services (where they can collect information as part of know-your-client, or KYC, applications) to healthcare (where an agentic AI can survey members conversationally and refill prescriptions). The variety of tasks they can tackle includes:

- Autonomous diagnostic systems (such as Google’s DeepMind for retinal scans), which analyze medical images or patient data to suggest diagnoses and treatments.
- Algorithmic trading bots in financial services that autonomously analyze market data, predict trends, and execute trades with minimal human intervention.
- AI agents in the insurance industry that collect key details across channels and analyze the data to give status updates; they can also ask pre-enrollment questions and provide electronic authorizations.
- Supplier communications agents that help customers optimize supply chains and minimize costly disruptions by autonomously tracking supplier performance and detecting and responding to delays; that frees procurement teams from time-consuming manual monitoring and firefighting tasks.
- Sales qualification agents that allow sellers to focus their time on high-priority sales opportunities while the agent researches leads, helps prioritize opportunities, and guides customer outreach with personalized emails and responses, according to IDC’s Jyoti.
- Customer intent and customer knowledge management agents that can make a first impression for customer care teams facing high call volumes, talent shortages, and high customer expectations, according to Jyoti. “These agents work hand in hand with a customer service representative by learning how to resolve customer issues and autonomously adding knowledge-based articles to scale best practices across the care team,” she explained.

And for developers, Cognition Labs in March launched Devin AI, a DIY agentic AI tool that autonomously works through tasks that would typically require a small team of software engineers. The agent can build and deploy apps end-to-end, independently find and fix bugs in codebases, and train and fine-tune its own AI models. Devin can even learn how to use unfamiliar technologies by performing its own research on them.

Notably, AI agents also have the ability to remember past interactions and behaviors. They can store those experiences and even perform “self-reflection,” or evaluation, to inform future actions, according to IDC. “This memory component allows for continuity and improvement in agent performance over time,” the research firm said in a report.

Other agentic AI systems (such as AlphaGo, AlphaZero, and OpenAI’s Dota 2 bot) can be trained using reinforcement learning to autonomously strategize and make decisions in games or simulations to maximize rewards.
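To make the reward-maximizing idea concrete, here is a toy sketch of tabular Q-learning, the simplest form of reinforcement learning: an agent on a tiny one-dimensional track learns, by trial and error, that moving right earns a reward. The environment, reward values, and hyperparameters are all invented for this example; systems like AlphaZero use far more sophisticated methods, but the underlying loop of act, observe reward, update is the same.

```python
# Toy tabular Q-learning: an agent on a 1-D track learns to walk right
# toward a reward. Everything here (track length, rewards, hyperparameters)
# is invented for illustration; it only shows the reward-maximization loop.

import random

N_STATES, ACTIONS = 5, [-1, +1]          # positions 0..4; move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != N_STATES - 1:         # episode ends at the rightmost cell
        # Explore occasionally, otherwise pick the action with the best Q-value.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else -0.01
        # Q-learning update: nudge the estimate toward reward + best future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the greedy policy should move right from every position.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```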
Agentic AI software development

Evans Data Corp., a market research firm that specializes in software development, conducted a multinational survey of 434 AI and machine learning developers. When asked what they most likely would create using genAI tools, the top answer was software code, followed by algorithms and LLMs. They also expect genAI to shorten the development lifecycle and make it easier to add machine-learning features.

GenAI-assisted coding allows developers to write code faster, and often more accurately, using digital tools that create code based on natural language prompts or partial code inputs. (Like some email platforms, the tools can also suggest code for auto-completion as it’s written in real time.) By 2027, 70% of professional developers are expected to be using AI-powered coding tools, up from less than 10% in September 2023, according to Gartner Research. And within three years, 80% of enterprises will have integrated AI-augmented testing tools into their software engineering toolchain, a significant increase from approximately 15% early last year, Gartner said.

One of the top tools used for genAI-automated software development is GitHub Copilot. It’s powered by genAI models developed by GitHub, OpenAI (the creator of ChatGPT), and Microsoft, and is trained on all natural languages that appear in public repositories. GitHub combined multiple AI agents to enable them to work hand-in-hand to solve coding tasks; multi-agent AI systems allow multiple applications to work together on a common purpose. For example, GitHub earlier this year launched Copilot Workspace, a technical preview of its Copilot-native developer environment. The multi-agent system allows specialized agents to collaborate and communicate, solving complex problems more efficiently than a single agent. With agentic AI coding tools like Copilot Workspace and code-scanning autofix, developers will be able to more efficiently build software that’s more secure, according to a GitHub blog.

The technology could also give rise to less positive results. AI agents might, for example, be better at figuring out online customer intent, a potential red flag for users who have long been concerned about security and privacy when searching and browsing online; detecting their intent could reveal sensitive information. According to Heck, AI agents could help companies understand a user’s intent more precisely, making it easier to “monetize this data at higher rates.” “But this increased granularity of knowledge of the user’s intent can also be more likely to cause security and privacy issues if safeguards are not put in place,” he said.

And while most agentic AI tools claim to be safe and secure, a lot depends on the information sources they use. That’s because the source of data used by the agents can vary, from more limited corporate data to the wide-open internet. (The latter has a tendency to affect genAI outputs and can introduce errors and hallucinations.) Setting guardrails around information access can act like a boss and put limits on agentic AI actions.

That’s why user education and training are critical to the secure implementation and use of AI agents and copilots, according to Andrew Silberman, director of marketing at security vendor Zenity. “Users need to understand not just how to operate these tools, but also their limitations, potential biases, and security implications,” Silberman wrote in a blog post. “Training programs should cover topics such as recognizing and reporting suspicious AI behavior, understanding the appropriate use cases for AI tools, and maintaining data privacy when interacting with AI systems.”
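The guardrail idea mentioned above can be as simple as an allowlist that checks every action an agent proposes before it runs. The sketch below is a hypothetical illustration; the tool names, data-source labels, and policy values are invented for this example and do not come from Zenity, GitHub, or any other product named in this article.

```python
# Illustrative guardrail layer for an agent: every proposed action is checked
# against an allowlist of tools and data sources before it runs.
# All names and policy values here are hypothetical, for illustration only.

ALLOWED_TOOLS = {"search_corporate_wiki", "create_ticket"}   # actions the agent may take
ALLOWED_SOURCES = {"corporate"}                               # corporate data only, not the open web

def guarded_execute(action: str, source: str, run_tool) -> str:
    """Run a tool only if both the action and its data source are allowlisted."""
    if action not in ALLOWED_TOOLS:
        return f"BLOCKED: tool '{action}' is not permitted for this agent."
    if source not in ALLOWED_SOURCES:
        return f"BLOCKED: data source '{source}' is outside the allowed scope."
    return run_tool()

# An agent proposing an open-web lookup is stopped,
# while an allowlisted corporate action goes through.
print(guarded_execute("search_open_web", "open_web", lambda: "..."))
print(guarded_execute("create_ticket", "corporate", lambda: "Ticket INC-123 created (stub)."))
```

Pairing a policy layer like this with the user training Silberman describes is one way to keep an agent’s autonomy within bounds the organization has actually agreed to.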
https://www.computerworld.com/article/3617392/what-are-ai-agents-and-why-are-they-now-so-pervasive.h