OpenAI Agents SDK and Azure-Hosted Reasoning Model

I’ve used the OpenAI Agent SDK to create an agent that runs with Azure Foundry-hosted models like GPT-4o, and everything works fine. However, I’m now looking to modify the setup to use one of the reasoning models available on Azure Foundry, such as o1.

Despite explicitly setting the model property to o1, as you can see in the code below, my agent still routes requests to GPT-4o. I couldn’t find any documentation on how to properly configure it for o1.

Here’s my current code - can you help me understand what I might be missing?

# This function returns the Azure OpenAI async client.
def get_azure_openai_async_client() -> AsyncAzureOpenAI:
    return AsyncAzureOpenAI(
        api_key=AZURE_OPENAI_API_KEY,
        azure_endpoint=AZURE_OPENAI_ENDPOINT,
        api_version=AZURE_OPENAI_API_VERSION,
        azure_deployment="gpt-4o"
    )

# This function should return the model object.
def get_reasoning_model(client: AsyncAzureOpenAI):
    return OpenAIChatCompletionsModel(
        model="o1",
        openai_client=client
    )

# Create custom OpenAI client for Azure
client = get_azure_openai_async_client()

# Create the agent
agent = Agent(
    name="Hello World Agent",
    instructions="...",  # trimmed for brevity
    model=get_reasoning_model(client)
)

Questions:

  • Does the Chat Completion API support reasoning models, or are they only compatible with the Responses API?
  • If Azure provides access to the Responses API, how can I integrate it with reasoning models using the Agent SDK?

Hi,

The Chat Completions API provides access to reasoning models such as o3 and o4-mini. See the model parameter docs: https://platform.openai.com/docs/api-reference/completions/create#completions-create-model

For help with Azure products, you will need to consult the relevant Microsoft Azure OpenAI support pages, as those services are not covered here.
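One thing worth checking in the code you posted: with the Azure client, the `azure_deployment` argument (not the `model` string passed to the SDK) determines which deployment actually serves the request, and yours is pinned to `"gpt-4o"`. A minimal sketch of the fix is below; it assumes you have an o1 deployment in your Foundry project, and `"o1-deployment"` plus the environment-variable names are placeholders for your actual values.

```python
# Sketch only: requires the openai and openai-agents packages and a live
# Azure deployment; names below are placeholders, not verified values.
import os

from openai import AsyncAzureOpenAI
from agents import Agent, OpenAIChatCompletionsModel


def get_azure_o1_client() -> AsyncAzureOpenAI:
    return AsyncAzureOpenAI(
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_version=os.environ["AZURE_OPENAI_API_VERSION"],
        # On Azure, this deployment name selects the model that serves the
        # call; "o1-deployment" is a placeholder for your deployment's name.
        azure_deployment="o1-deployment",
    )


agent = Agent(
    name="Hello World Agent",
    instructions="...",  # trimmed for brevity
    model=OpenAIChatCompletionsModel(
        model="o1",
        openai_client=get_azure_o1_client(),
    ),
)
```

If your `api_version` exposes the Responses API on Azure, the SDK's `OpenAIResponsesModel` can be substituted for `OpenAIChatCompletionsModel` in the same way, but check Microsoft's documentation for availability in your region.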