MCP server announced for JFrog supply chain management platform
Thursday, July 17, 2025, 15:16, by InfoWorld
Software supply chain management provider JFrog has become the latest vendor to release a Model Context Protocol (MCP) server for its platform so developers can securely link Large Language Models (LLMs) and AI agents to tools and data sources.
“Until recently, connecting AI agents to diverse enterprise systems created development bottlenecks, with each integration requiring custom code and ongoing maintenance,” noted company CTO Yoav Landman. JFrog’s cloud-based remote MCP server acts as an API that lets developers connect an LLM-based assistant such as Microsoft Copilot or Cursor to the platform and query it in plain English. “It opens the gate for many systems to be exposed to LLMs,” he said. For example, JFrog developers can tell the platform, “Create a new local repository,” or ask, “Do we have this package in our organization?”

The company says developers get immediate visibility into open-source vulnerabilities and package usage, inside or outside their organization, without context switching. AI-powered automation also simplifies complex queries that previously required advanced developer knowledge, helping teams develop smarter, faster, and at scale, the company says.

Launching today in beta, the MCP server will be in full production “in a few weeks,” Landman said, and is free to JFrog customers. “We are seeing a huge move towards LLM-enabled development environments,” he noted. “By allowing developers straight from our IDE (Integrated Development Environment) to write in plain English what they need, and executing, is what customers want, because it offloads a lot of administration and empowers developers to be more productive.”

On the security front, the company said, MCP Server for JFrog offers:

- Secure OAuth 2.1 authentication that enforces token-based authorization.
- Essential tools for gaining software package insights: users can create and manage projects and repositories, view build status, and query detailed package and vulnerability information.
- Production-grade monitoring, including comprehensive logging and event tracking for insights into tool usage.

The ‘fastest adoption of a standard’

Released last November by Claude AI developer Anthropic, the MCP protocol provides a standardized way to connect AI models to different data sources and tools. Developers can create their own MCP servers, or get one from a vendor that’s tailor-made for connecting particular applications. The protocol offers a growing list of pre-built integrations that an LLM can plug into directly, and the flexibility to switch between LLM providers and vendors.

Software providers like it: this week at least five vendors, including JFrog, have introduced new MCP servers. Rowan Curran, a Forrester Research principal analyst who monitors data science, machine learning, and AI technologies, said in an interview, “It’s the fastest adoption of a standard I’ve ever seen. There’s a new one every day.”

Somewhat competing with this protocol is Google’s Agent2Agent Protocol (A2A), announced in April. Google says A2A complements MCP by addressing the challenges of deploying large-scale, multi-agent systems across platforms and cloud environments. As Curran explained, MCP focuses on how to link an agent to tools and data, while A2A focuses on how to build multi-agent orchestrations and workflows. A2A is supported by Atlassian, Box, Cohere, Intuit, Langchain, MongoDB, PayPal, Salesforce, SAP, ServiceNow, UKG, and Workday, as well as by service providers such as Accenture, BCG, Capgemini, Cognizant, Deloitte, HCLTech, Infosys, KPMG, McKinsey, PwC, TCS, and Wipro.
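To give a sense of how small a custom MCP server can be, the sketch below uses the official MCP Python SDK to expose a single tool answering a question like “Do we have this package in our organization?” The tool name, its arguments, and the mock index are illustrative assumptions for this article, not JFrog’s actual server or API.

```python
# Minimal MCP server sketch using the official MCP Python SDK (pip install mcp).
# The check_package tool is a hypothetical stand-in for a package lookup;
# it is NOT JFrog's implementation.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("artifact-demo")

# Mock internal package index; a real server would query a registry instead.
MOCK_INDEX = {
    ("requests", "2.32.3"): "present, no known vulnerabilities",
    ("log4j-core", "2.14.1"): "present, affected by CVE-2021-44228",
}


@mcp.tool()
def check_package(name: str, version: str) -> str:
    """Report whether a package/version exists in the mock internal index."""
    status = MOCK_INDEX.get((name, version))
    return f"{name} {version}: {status}" if status else f"{name} {version}: not found"


if __name__ == "__main__":
    # Default stdio transport: an MCP-aware client (such as an IDE assistant)
    # launches this process and exchanges JSON-RPC messages over stdin/stdout.
    mcp.run()
```

An MCP-capable client can discover check_package from the server’s tool listing and call it when a user asks the kind of plain-English question quoted above.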
Move cautiously

Despite the enthusiasm for MCP and recent security improvements in the protocol (it recently added OAuth support for user authentication), Curran says there are still concerns that CSOs, infosec pros, and developers have to deal with. One of them is validation of the data a user calls up through the server, because AI is still prone to hallucinations, he said.

Developers should pay attention to the MCP framework and best practices, he said. For example, Anthropic says MCP servers:

- Should prioritize user privacy protection. Developers should take care to responsibly handle personal data, follow privacy best practices, and ensure compliance with applicable laws.
- Should only collect data from the user’s context that is necessary to perform their function, and should not collect extraneous conversation data, even for logging purposes.

Despite the eagerness of developers to leverage MCP, and the growing number of vendors launching MCP servers, Curran urged CSOs and infosec leaders to move cautiously. “It’s not been in the wild long enough to clearly see the broad range of potential attacks,” he warned. “There are still lingering questions about scalable authentication and data integrity and stuff like that, so keeping MCP servers operating within your enterprise environment is a safer path to go down right now, rather than trying to call out to some vendor’s external MCP server that exists outside of your security environment.”

Because it’s still a new protocol, many vendors are implementing MCP servers not as broad-based front ends to their products facing externally onto the web so any customer can call them, but as internal servers facing the rest of the customer’s platform, he said. In short, he said, look at whether an MCP server can exist within a firewalled, fully authenticated environment rather than calling out to an external SaaS service. And make sure to include the MCP server in threat modelling and penetration tests. “Take a more exploratory approach,” he urged security pros, “versus trying to get an MCP server out the door quickly.”
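As a minimal sketch of the data-minimization guidance above, and assuming a locally run server in line with Curran’s advice to keep MCP inside the enterprise, the hypothetical tool below logs only the tool name and outcome, never the user’s arguments or conversation content. The audit logger and the find_package tool are illustrative assumptions, not part of any vendor’s server.

```python
# Sketch of privacy-conscious audit logging for an MCP tool (assumed example,
# not a vendor implementation). Only the tool name and outcome are logged;
# arguments and conversation content are deliberately left out.
import logging

from mcp.server.fastmcp import FastMCP

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("mcp.audit")

mcp = FastMCP("internal-demo")

# Mock internal index; a real deployment would query an internal service.
MOCK_INDEX = {"requests": "2.32.3", "flask": "3.0.3"}


@mcp.tool()
def find_package(name: str) -> str:
    """Look up a package in a mock internal index."""
    found = name in MOCK_INDEX
    # Minimal audit trail: what ran and whether it succeeded -- no arguments,
    # no prompt text, per the guidance on not logging extraneous data.
    audit.info("tool=find_package ok=%s", found)
    return f"{name} {MOCK_INDEX[name]}" if found else f"{name}: not found"


if __name__ == "__main__":
    # Running over the default stdio transport keeps the server local to the
    # developer's machine or an internal host, rather than an external endpoint.
    mcp.run()
```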
https://www.infoworld.com/article/4023652/mcp-server-announced-for-jfrog-supply-chain-management-pla...