Provides a unified OpenAI-compatible API proxy for different LLM models, and supports deployment to any Edge Runtime environment.
## Supported models
- OpenAI
- Anthropic
- Google Vertex Anthropic
- Google Gemini
- DeepSeek
## Deployment

### Environment variables
- `API_KEY`: Proxy API Key, required when calling the proxy API
- OpenAI: Supports OpenAI models, e.g. `gpt-4o-mini`
  - `OPENAI_API_KEY`: OpenAI API Key
- VertexAI Anthropic: Supports Anthropic models on Google Vertex AI, e.g. `claude-3-5-sonnet@20240620`
  - `VERTEX_ANTROPIC_GOOGLE_SA_CLIENT_EMAIL`: Google Cloud Service Account Email
  - `VERTEX_ANTROPIC_GOOGLE_SA_PRIVATE_KEY`: Google Cloud Service Account Private Key
  - `VERTEX_ANTROPIC_REGION`: Google Vertex AI Anthropic Region
  - `VERTEX_ANTROPIC_PROJECTID`: Google Vertex AI Anthropic Project ID
- Anthropic: Supports Anthropic models, e.g. `claude-3-5-sonnet-20240620`
  - `ANTROPIC_API_KEY`: Anthropic API Key
- Google Gemini: Supports Google Gemini models, e.g. `gemini-1.5-flash`
  - `GOOGLE_GEN_AI_API_KEY`: Google Gemini API Key
## Usage
Once deployed successfully, you can call different models through OpenAI’s API interface.
For example, calling OpenAI’s API interface:
```sh
curl http://localhost:8787/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      { "role": "user", "content": "Hello, world!" }
    ]
  }'
```
Or calling Anthropic’s API interface:
```sh
curl http://localhost:8787/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
    "model": "claude-3-5-sonnet-20240620",
    "messages": [
      { "role": "user", "content": "Hello, world!" }
    ]
  }'
```
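Since the proxy exposes the standard OpenAI chat completions interface, streaming responses should work in the usual way; this is a sketch assuming the proxy forwards the standard `stream` parameter:

```sh
# Request a streamed response (server-sent events), assuming
# the proxy passes "stream": true through to the upstream model.
curl http://localhost:8787/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "stream": true,
    "messages": [
      { "role": "user", "content": "Hello, world!" }
    ]
  }'
```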
It can also be used with OpenAI’s official SDK, for example:
```js
import OpenAI from 'openai'

const openai = new OpenAI({
  baseURL: 'http://localhost:8787/v1',
  apiKey: '$API_KEY',
})

const response = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello, world!' }],
})
console.log(response)
```
See the source on GitHub: rxliuli/openai-api-proxy.
I’m not sure if it’s appropriate to post here. If it’s not, I will delete it.