Portkey: An open-source AI gateway for easy LLM orchestration

Thursday, March 6, 2025, 10:00, by InfoWorld
The explosion of open-source AI frameworks has given developers unprecedented flexibility in deploying AI models. Portkey, an open-source AI gateway, simplifies AI model orchestration by providing a unified API for multiple AI providers, reducing friction in integrating models into applications. In addition to large language models (LLMs), Portkey supports vision, audio (text-to-speech and speech-to-text), image generation, and other multi-modal generative AI models.

This article explores how Portkey AI Gateway streamlines AI model deployment, manages API interactions, and ensures scalability, focusing on the capabilities demonstrated in the code examples below.

Project overview – Portkey AI Gateway

Portkey AI Gateway is an open-source project and hosted service designed to simplify the integration of various AI models. It offers a flexible API that enables developers to seamlessly switch between commercial models from providers including OpenAI, Anthropic, Azure OpenAI, open-source inference services like Hugging Face, Groq, and Fireworks AI, and local models running on Ollama.

By acting as a unified middleware, Portkey enables:

Seamless AI model switching across multiple providers

Efficient rate-limiting and caching to optimize API calls

Scalability for large-scale, AI-driven applications

Simplified request management for multiple AI back ends

Portkey has gained traction in AI-powered applications that require flexibility and cost efficiency when interacting with different AI models.

What problem does Portkey solve?

Integrating and managing multiple LLM providers has been a challenge for developers working on AI applications. Traditional approaches often lead to several pain points:

Vendor lock-in with a single LLM provider

Difficulty in comparing performance across different models

Lack of built-in load balancing and fail-over mechanisms

Inconsistent APIs across providers

The current landscape of LLM integration is often fragmented and inefficient. Developers face multiple challenges:

Managing authentication and API keys for multiple providers

Implementing custom load balancing logic

Ensuring consistent response formats across different models

Optimizing costs while maintaining performance

These limitations particularly impact developers, AI researchers, and companies building LLM-powered applications. Organizations seeking to leverage multiple LLM providers are constrained by the complexity of managing multiple integrations and the lack of built-in optimization features.

A closer look at Portkey AI Gateway

Portkey AI Gateway is an open-source library that simplifies LLM integration for Python developers. It provides a robust framework with a unified API that enables seamless interaction with multiple LLM providers.

At the core of Portkey’s functionality is its ability to abstract away the differences between various LLM providers. It allows developers to easily switch between models, or implement advanced features like load balancing, without changing their application code.

The project currently supports multiple LLM providers, including:

Anthropic

Azure OpenAI

Google

Groq

OpenAI

Portkey AI Gateway distinguishes itself through several unique features:

Unified API across providers

Built-in load balancing

Easy provider switching

Consistent response formatting

Key use cases for Portkey AI Gateway

Multi-provider integration: Portkey AI Gateway enables developers to easily integrate multiple LLM providers into their applications. For instance, an application can:

Use OpenAI’s GPT-4 for complex reasoning tasks

Leverage Groq’s llama3-70b-8192 for faster response times

Implement fallback mechanisms to ensure high availability

Load balancing and optimization: The library allows developers to implement sophisticated load balancing strategies, such as:

Distributing requests across multiple providers based on custom weights

Automatically failing over to alternative providers in case of errors

Optimizing costs by routing requests to the most cost-effective provider

Simplified development workflow: Portkey AI Gateway supports a more streamlined development process by:

Providing a consistent API across different LLM providers

Allowing easy switching between models for testing and comparison

Simplifying the management of API keys and authentication
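The fallback mechanism mentioned above is expressed through the same config structure used for load balancing: targets are listed in priority order, and the gateway moves to the next target when a request fails. A minimal sketch follows; the key names mirror Portkey's config conventions, but treat `on_status_codes` and the placeholder keys as assumptions to verify against the current schema.

```python
import os

# A fallback config sketch: the gateway tries targets in order, falling
# through to the next target when the current one returns an error.
fallback_config = {
    'strategy': {
        'mode': 'fallback',
        'on_status_codes': [429, 500, 502, 503]  # assumption: which errors trigger fallback
    },
    'targets': [
        {   # primary: OpenAI
            'provider': 'openai',
            'api_key': os.environ.get('OPENAI_API_KEY', 'sk-placeholder'),
        },
        {   # secondary: Groq, used only if the OpenAI call fails
            'provider': 'groq',
            'api_key': os.environ.get('GROQ_API_KEY', 'gsk-placeholder'),
            'override_params': {'model': 'llama3-70b-8192'},
        },
    ],
}

# With portkey_ai installed, the config plugs in exactly like the
# load-balancing example below: client = Portkey(config=fallback_config)
print(fallback_config['strategy']['mode'])
```

The order of the targets list is the priority order, so the cheaper or faster provider can be placed first and a more reliable one used as the safety net.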

Integrating Portkey AI Gateway

Let’s look at some code examples to illustrate Portkey’s capabilities.

Basic usage with a single provider:

from portkey_ai import Portkey
import os

client = Portkey(
    provider='openai',
    Authorization=os.environ['OPENAI_API_KEY']
)

response = client.chat.completions.create(
    messages=[{'role': 'user', 'content': "What's the meaning of life?"}],
    model='gpt-4o-mini'
)

print(response.choices[0].message.content)

Using multiple providers:

from portkey_ai import Portkey
import os

# OpenAI client
openai_client = Portkey(
    provider='openai',
    Authorization=os.environ['OPENAI_API_KEY']
)

response = openai_client.chat.completions.create(
    messages=[{'role': 'user', 'content': "What's the meaning of life?"}],
    model='gpt-4o-mini'
)

print('From OpenAI:')
print(response.choices[0].message.content)

# Groq client
groq_client = Portkey(
    provider='groq',
    Authorization=os.environ['GROQ_API_KEY']
)

response = groq_client.chat.completions.create(
    messages=[{'role': 'user', 'content': "What's the meaning of life?"}],
    model='llama3-70b-8192'
)

print('From Groq:')
print(response.choices[0].message.content)

Implementing load balancing:

from portkey_ai import Portkey
import os

lb_config = {
    'strategy': {'mode': 'loadbalance'},
    'targets': [{
        'provider': 'openai',
        'api_key': os.environ['OPENAI_API_KEY'],
        'weight': 0.1
    }, {
        'provider': 'groq',
        'api_key': os.environ['GROQ_API_KEY'],
        'weight': 0.9,
        'override_params': {
            'model': 'llama3-70b-8192'
        },
    }],
}

client = Portkey(config=lb_config)

response = client.chat.completions.create(
    messages=[{'role': 'user', 'content': "What's the meaning of life?"}],
    model='gpt-4o-mini'
)

print(response.choices[0].message.content)

Implementing conditional routing:

from portkey_ai import Portkey
import os

openai_api_key = os.environ['OPENAI_API_KEY']
groq_api_key = os.environ['GROQ_API_KEY']

pk_config = {
    'strategy': {
        'mode': 'conditional',
        'conditions': [
            {
                'query': {'metadata.user_plan': {'$eq': 'pro'}},
                'then': 'openai'
            },
            {
                'query': {'metadata.user_plan': {'$eq': 'basic'}},
                'then': 'groq'
            }
        ],
        'default': 'groq'
    },
    'targets': [
        {
            'name': 'openai',
            'provider': 'openai',
            'api_key': openai_api_key
        },
        {
            'name': 'groq',
            'provider': 'groq',
            'api_key': groq_api_key,
            'override_params': {
                'model': 'llama3-70b-8192'
            }
        }
    ]
}

metadata = {
    'user_plan': 'pro'
}

client = Portkey(config=pk_config, metadata=metadata)

response = client.chat.completions.create(
    messages=[{'role': 'user', 'content': "What's the meaning of life?"}],
    model='gpt-4o-mini'
)
print(response.choices[0].message.content)

The above example uses the metadata value user_plan to determine which model should be used for the query. This is useful for SaaS providers who offer AI through a freemium plan.

Harnessing Portkey AI Gateway for LLM integration

Portkey represents a significant innovation in LLM integration, addressing critical challenges in managing multiple providers and optimizing performance. By providing an open-source framework that enables seamless interaction with various LLM providers, the project fills a notable gap in current AI development workflows.

The project thrives on community collaboration, welcoming contributions from developers worldwide. With an active GitHub community and open issues, Portkey encourages developers to participate in expanding its capabilities. The project’s transparent development approach and open-source licensing make it accessible for both individual developers and enterprise teams.

Portkey also offers a commercial implementation of its AI gateway, providing enterprises with a robust solution for managing LLM integrations. Key features include a unified API for more than 250 LLM providers, load balancing, conditional routing, automatic retries and fallbacks, semantic caching, and multi-modal support.
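As a rough sketch of how two of these reliability features surface in the open-source gateway, automatic retries and caching can be declared in the same config dictionary used in the routing examples above. The field names below follow Portkey's config conventions, but treat them as assumptions to verify against the current schema.

```python
import os

# Sketch: gateway-level reliability settings expressed in the same config
# dictionary used for routing. 'retry.attempts' caps retry attempts and
# 'cache.mode' enables response caching for repeated requests; verify the
# exact field names against the current Portkey config schema.
reliability_config = {
    'retry': {'attempts': 3},      # retry failed requests up to 3 times
    'cache': {'mode': 'simple'},   # cache responses to identical requests
    'provider': 'openai',
    'api_key': os.environ.get('OPENAI_API_KEY', 'sk-placeholder'),
}

# Usage mirrors the earlier examples:
# client = Portkey(config=reliability_config)
print(reliability_config['retry']['attempts'])
```

Because these settings live in the config rather than in application code, they can be tuned or combined with load balancing and fallbacks without touching the call sites.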

Bottom line – Portkey AI Gateway

Combining ease of use, comprehensive features, and active community support, the Portkey AI Gateway stands out as a valuable tool for developers seeking to integrate multiple LLM providers into their applications. It is also available as a hosted service starting at $49 per month, with a free tier for prototyping and testing. By facilitating seamless interactions with various LLM providers, Portkey contributes to the advancement of more flexible and robust AI-powered applications.
https://www.infoworld.com/article/3835182/portkey-an-open-source-ai-gateway-for-easy-llm-orchestrati...
