Using Gemini with OpenAI Agents SDK

Hello everyone,

I’m trying to use the Gemini API with OpenAI’s Agents SDK for TypeScript, but it doesn’t work. Does anyone know a way to make it work?

import { Agent, Runner } from '@openai/agents';
import { aisdk } from '@openai/agents-extensions';
import { google } from '@ai-sdk/google';

new Agent<AgentContextStorefront, typeof AgentsResponseStorefront>({
  name: cfg.name,
  instructions: promptWithHandoffInstructionsAndPersonality(cfg.instructions, multiAgentSystem.systemInstructions, context),
  model: aisdk(google('gemini-2.5-flash')),
  tools: usedTools,
  handoffs: [],
  handoffDescription: cfg.handoffDescription,
  outputType: AgentsResponseStorefront,
  toolUseBehavior: "run_llm_again",
  modelSettings: {
    toolChoice: cfg.modelSettings?.toolChoice ?? 'auto',
    parallelToolCalls: cfg.modelSettings?.parallelToolCalls ?? false,
  },
});

When I run it, I get:

AI_APICallError: Function calling with a response mime type: 'application/json' is unsupported
    at <anonymous> (/Users/bakikucukcakiroglu/Workspace/lookfor/backend/node_modules/@ai-sdk/provider-utils/src/response-handler.ts:59:16)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at postToApi (/Users/bakikucukcakiroglu/Workspace/lookfor/backend/node_modules/@ai-sdk/provider-utils/src/post-to-api.ts:111:28)
    at GoogleGenerativeAILanguageModel.doStream (/Users/bakikucukcakiroglu/Workspace/lookfor/backend/node_modules/@ai-sdk/google/src/google-generative-ai-language-model.ts:312:50)
    at AiSdkModel.getStreamedResponse (/Users/bakikucukcakiroglu/Workspace/lookfor/backend/node_modules/@openai/agents-extensions/src/aiSdk.ts:615:26)
    at Runner.#runStreamLoop (/Users/bakikucukcakiroglu/Workspace/lookfor/backend/node_modules/@openai/agents-core/src/run.ts:731:28)
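
For context: the agent above sets both tools and outputType, and as far as I can tell the outputType makes the AI SDK adapter request a JSON response MIME type, which is exactly the combination the error says Gemini rejects. Here is a minimal sketch that sidesteps it, assuming plain-text output is acceptable (get_time is just a placeholder tool I made up):

import { Agent, run, tool } from '@openai/agents';
import { aisdk } from '@openai/agents-extensions';
import { google } from '@ai-sdk/google';
import { z } from 'zod';

// Placeholder tool so the request still includes function declarations.
const getTime = tool({
  name: 'get_time',
  description: 'Get the current time.',
  parameters: z.object({}),
  async execute() {
    return new Date().toISOString();
  },
});

const agent = new Agent({
  name: 'Assistant',
  instructions: 'You are a helpful assistant.',
  model: aisdk(google('gemini-2.5-flash')),
  tools: [getTime],
  // No outputType: plain-text output means the adapter should not ask
  // Gemini for 'application/json' alongside the tool declarations.
});

async function main() {
  const result = await run(agent, 'What time is it?');
  console.log(result.finalOutput);
}

main();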

I don’t see where you’re specifying a different base URL?

Looks like you can specify it in your environment variables:
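
For reference, the openai Node client falls back to the OPENAI_API_KEY and OPENAI_BASE_URL environment variables when apiKey and baseURL aren’t passed explicitly, so something along these lines should point it at Gemini’s OpenAI-compatible endpoint:

OPENAI_API_KEY=<your-gemini-key>
OPENAI_BASE_URL=https://generativelanguage.googleapis.com/v1beta/openai/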


I don’t really know how the problem is related to the baseURL, but here you should be able to see where it is set.

How? That’s not your code or environment, it’s just documentation. How does that prove you’ve followed it?

Of all the things I might see, your environment variables are (rightly) not amongst them 🙂

Did you include them in the OP? No.

So how did you expect me to know what you had included? For all I knew, you hadn’t set it.

You reported this problem, and this is one of the obvious potential issues.

Below is the code that worked for me:

import {
  Agent,
  Runner,
  setTracingDisabled,
  tool,
  OpenAIProvider,
  setDefaultOpenAIClient,
  setOpenAIAPI,
} from "@openai/agents";
import OpenAI from "openai";
import { config } from "dotenv";
import { z } from "zod";

// Load environment variables from .env file
config();

if (!process.env.BASE_URL || !process.env.API_KEY || !process.env.MODEL_NAME) {
  throw new Error(
    "Please set BASE_URL, API_KEY, and MODEL_NAME via env var or code."
  );
}

/**
 * This example uses a custom provider for all requests by default. We do three things:
 * 1. Create a custom client.
 * 2. Set it as the default OpenAI client, and don't use it for tracing.
 * 3. Set the default API as Chat Completions, as most LLM providers don't yet support Responses API.
 *
 * Note that in this example, we do not set up tracing, under the assumption that you don't have an API key
 * from platform.openai.com. If you do have one, you can set the `OPENAI_API_KEY` env var for tracing.
 */

// Create a custom OpenAI client and provider
const openaiClient = new OpenAI({
  apiKey: process.env.API_KEY,
  baseURL: process.env.BASE_URL,
});
const modelProvider = new OpenAIProvider({
  openAIClient: openaiClient,
});
setDefaultOpenAIClient(openaiClient); // use the custom client for all default model requests
setOpenAIAPI("chat_completions");
setTracingDisabled(true);

// Tool definition
const getWeather = tool({
  name: "get_weather",
  description: "Get the weather for a city.",
  parameters: z.object({
    city: z.string().describe("The city to get weather for"),
  }),
  async execute(input) {
    // input: { city: string }
    console.log(`[debug] getting weather for ${input.city}`);
    return `The weather in ${input.city} is sunny.`;
  },
});

async function main() {
  const agent = new Agent({
    name: "Assistant",
    instructions:
      "You only respond in short sentences. Mention temperature in Farenheit and wind speed as well.",
    model: process.env.MODEL_NAME,
    tools: [getWeather],
  });

  const runner = new Runner({ modelProvider });
  const result = await runner.run(
    agent,
    "What's the weather in Washington D.C?"
  );
  console.log(result.finalOutput);
}

main();

.env file:

BASE_URL=https://generativelanguage.googleapis.com/v1beta/openai/
API_KEY=<YourGemini-Key>
MODEL_NAME=gemini-2.5-flash
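
With the .env in place, you can run the script with, e.g., npx tsx main.ts (the filename is an assumption); the agent should call the get_weather tool and answer through Gemini’s OpenAI-compatible Chat Completions endpoint.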