
3 key features in Kong AI Gateway 3.10

Wednesday, April 2, 2025, 15:00, by InfoWorld
Kong AI Gateway 3.10 introduces new capabilities designed to help organizations govern their generative AI usage more effectively, reduce hallucinations from large language models (LLMs), and boost developer productivity, all while keeping sensitive data protected.

As enterprises rush to operationalize generative AI and LLMs across workflows, the challenge has shifted from access to orchestration. Platform teams are now tasked with making AI usage secure, scalable, and cost-effective, without sacrificing or slowing down developer velocity. Kong’s AI Gateway addresses this problem by unifying governance, observability, and LLM integration under one API-aware platform.

Here are three new features in Kong AI Gateway 3.10 that help teams simplify, scale, and secure their AI development workflows.

Reduce hallucinations with Automated RAG

Large language models are only as good as the data they’re exposed to. When LLMs generate confident but inaccurate answers, most commonly referred to as “hallucinations,” it’s often because they lack access to current or domain-specific knowledge. Retrieval-augmented generation (RAG) is a widely adopted approach to fix this by supplementing the model with up-to-date, relevant data pulled from vector databases.

However, traditional RAG workflows are complex and developer heavy. They typically require teams to generate embeddings, store data in vector databases, and build logic to enrich each prompt with relevant context.

Kong’s new AI RAG Injector plugin automates this entire process. Instead of relying on developers to implement RAG logic in every app, platform teams can now inject vetted data into prompts directly at the gateway layer. Embeddings are generated automatically, relevant data is fetched, and prompt enrichment happens inline, without manual intervention.
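For context, the manual steps the plugin automates look roughly like this. The sketch below is a minimal, self-contained illustration of a RAG pipeline using toy bag-of-words embeddings; the function names and in-memory "vector database" are illustrative stand-ins, not Kong APIs.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words term counts (stand-in for a real embedding model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# 1. Ingest: embed vetted documents and store them in a "vector database".
documents = [
    "Kong AI Gateway 3.10 adds automated RAG at the gateway layer.",
    "PII sanitization supports 20+ categories of personal data.",
]
vector_db = [(doc, embed(doc)) for doc in documents]

# 2. Retrieve: embed the user prompt and fetch the most similar document.
def retrieve(prompt: str) -> str:
    query = embed(prompt)
    return max(vector_db, key=lambda item: cosine(query, item[1]))[0]

# 3. Enrich: inject the retrieved context into the prompt before the LLM call.
def enrich(prompt: str) -> str:
    return f"Context: {retrieve(prompt)}\n\nQuestion: {prompt}"

print(enrich("What does automated RAG do?"))
```

With the RAG Injector, all three steps move out of application code and into gateway configuration, so every app behind the gateway gets the same enrichment behavior.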


This visual explains the automation process across data embedding, vector lookups, and prompt association, all coordinated to reduce LLM hallucinations and ensure higher-quality responses.

By shifting RAG into the platform layer, Kong enables organizations to deliver higher-quality AI responses while enforcing consistency and security and improving cost-efficiency. It’s one of the fastest ways to operationalize RAG across teams at scale.

Protect sensitive data with PII sanitization

For teams working with sensitive or regulated data, protecting personally identifiable information (PII) in AI workflows is not optional; it’s essential for proper governance. Developers often use regex libraries or handcrafted filters to redact PII, but these DIY solutions are prone to error, inconsistent enforcement, and missed edge cases.

Kong AI Gateway 3.10 introduces out-of-the-box PII sanitization, giving platform teams a reliable, enterprise-grade solution to scrub sensitive information from prompts before they reach the model and, if needed, to reinsert the redacted data into the response before it returns to the end user.
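To make the redact-then-restore flow concrete, here is a minimal sketch of the pattern: sensitive values are swapped for placeholders before the prompt leaves the gateway, and the original values are reinserted into the response. The two regex patterns and placeholder format are illustrative only; a production sanitizer covers far more categories and languages.

```python
import re

# Toy patterns for two PII categories (a real sanitizer covers 20+ categories).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def sanitize(prompt: str):
    """Replace PII with placeholders; keep a mapping for later reinsertion."""
    mapping = {}
    for category, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(prompt)):
            placeholder = f"[{category}_{i}]"
            mapping[placeholder] = match
            prompt = prompt.replace(match, placeholder)
    return prompt, mapping

def restore(response: str, mapping: dict) -> str:
    """Reinsert the original values into the model's response."""
    for placeholder, original in mapping.items():
        response = response.replace(placeholder, original)
    return response

clean, mapping = sanitize("Email jane@example.com about SSN 123-45-6789.")
print(clean)  # placeholders instead of PII
```

Doing this at the gateway means the model never sees the raw values, yet the end user still receives a coherent, fully populated response.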

PII sanitization in Kong supports 20+ categories of personal data in 12 languages and is compatible across most major AI providers. It runs in a private, self-hosted container for performance and compliance, ensuring organizations can retain control over their data without compromising speed or developer experience.


This diagram shows how Kong AI Gateway sanitizes data inline between agents and models, providing both security and performance.

With PII policies enforced at the gateway level, teams no longer have to embed custom logic into each service or agent. This simplifies rollout, strengthens compliance, and gives platform teams the control they need to confidently scale AI across the organization.

Centralize AI orchestration with platform-wide control

As organizations adopt multiple LLM providers and model types, complexity can grow quickly. Different teams may prefer OpenAI, Claude, or open-source models like Llama or Mistral. Each comes with its own SDKs, APIs, and limitations.

Kong AI Gateway 3.10 solves this with universal API support and native SDK integration. Developers can continue using the SDKs they already rely on (e.g., AWS, Azure) while Kong translates requests at the gateway level to interoperate across providers. This eliminates the need for rewriting app logic when switching models and simplifies centralized governance.
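The translation idea can be sketched as follows: the gateway accepts a request in one provider's shape and rewrites it into another's before forwarding. The request shapes below are simplified versions of the OpenAI-style and Anthropic-style chat formats (real schemas carry more fields), and the function is an illustration, not Kong's implementation.

```python
def openai_to_anthropic(request: dict) -> dict:
    """Translate a simplified OpenAI-style chat request into an
    Anthropic-style one: system messages move to a top-level field."""
    system = " ".join(
        m["content"] for m in request["messages"] if m["role"] == "system"
    )
    return {
        "model": request["model"],
        "max_tokens": request.get("max_tokens", 1024),
        "system": system,
        "messages": [m for m in request["messages"] if m["role"] != "system"],
    }

translated = openai_to_anthropic({
    "model": "claude-3-haiku",
    "messages": [
        {"role": "system", "content": "Be concise."},
        {"role": "user", "content": "Summarize RAG."},
    ],
})
print(translated["system"])
```

Because this rewrite happens at the gateway, applications keep their existing SDK calls unchanged while the backing model can be swapped via configuration.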

This latest release also includes cost-based load balancing, enabling Kong to route requests based on token usage and pricing. For example, low-complexity prompts can go to cheaper models, while high-value tasks route to premium providers. This is especially helpful for companies using multiple LLMs for different use cases, allowing them to optimize for both performance and budget.
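A toy version of cost-based routing might look like the following, using estimated token count as a proxy for prompt complexity. The model names, prices, and the four-characters-per-token heuristic are all assumptions for illustration, not Kong's actual routing logic or real provider pricing.

```python
# Illustrative per-1K-token prices; real pricing varies by provider and model.
MODELS = {
    "small-model": 0.0005,   # cheap, for low-complexity prompts
    "premium-model": 0.03,   # expensive, for high-value tasks
}

def estimate_tokens(prompt: str) -> int:
    """Rough heuristic: roughly 4 characters per token."""
    return max(1, len(prompt) // 4)

def route(prompt: str, token_threshold: int = 100) -> str:
    """Send short prompts to the cheap model, long ones to the premium model."""
    if estimate_tokens(prompt) <= token_threshold:
        return "small-model"
    return "premium-model"

def estimated_cost(prompt: str) -> float:
    """Estimated request cost in dollars for the chosen model."""
    return estimate_tokens(prompt) / 1000 * MODELS[route(prompt)]
```

In practice a gateway can fold in richer signals, such as actual tokenizer counts, provider rate limits, and per-team budgets, but the routing decision stays centralized rather than scattered across applications.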


This visual outlines the breadth of Kong AI Gateway features, including LLM orchestration, load balancing, prompt management, and more.

Additionally, Kong now supports pgvector, extending semantic capabilities like routing, caching, and guardrails to Postgres-based databases. This gives platform teams more flexibility when designing AI pipelines within existing cloud-native environments like Amazon Relational Database Service or Azure Cosmos DB.

A unified AI control plane

By combining governance, performance, and developer flexibility into one unified platform, Kong AI Gateway 3.10 provides a critical control plane for enterprises embracing generative AI. Whether it’s reducing hallucinations, protecting sensitive data, or orchestrating multiple LLMs at scale, Kong helps teams move faster and safer with AI.

For a full list of features and documentation, visit the Kong AI Gateway product page.

Marco Palladino is an inventor, software developer, and internet entrepreneur. As the CTO and co-founder of Kong, he is a co-author of the Kong open-source project, responsible for the design and delivery of the company’s products, while also providing technical thought leadership around APIs and microservices within both Kong and the external software community. Prior to Kong, Marco co-founded Mashape in 2010, which became the largest API marketplace and was acquired by RapidAPI in 2017.



New Tech Forum provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to doug_dineley@foundryco.com.
https://www.infoworld.com/article/3951984/3-key-features-in-kong-ai-gateway-3-10.html
