How can I use structured_output with Azure OpenAI?

I want to use structured output with Azure OpenAI.

I tried the following code, based on the code given in https://openai.com/index/introducing-structured-outputs-in-the-api/:

from pydantic import BaseModel
from openai import AzureOpenAI

class Step(BaseModel):
    explanation: str
    output: str


class MathResponse(BaseModel):
    steps: list[Step]
    final_answer: str


client = AzureOpenAI(api_key='[redacted]',
                     api_version='2024-05-01-preview',
                     azure_endpoint='[redacted]')

completion = client.beta.chat.completions.parse(
    model="gpt-4omini-2024-07-18-name",
    messages=[
        {"role": "system", "content": "You are a helpful math tutor."},
        {"role": "user", "content": "solve 8x + 31 = 2"},
    ],
    response_format=MathResponse,
)

message = completion.choices[0].message
if message.parsed:
    print(message.parsed.steps)
    print(message.parsed.final_answer)
else:
    print(message.refusal)

I get the error:

openai.BadRequestError: Error code: 400:
{
    "error": {
        "message": "Invalid parameter: response_format must be one of json_object, text.",
        "type": "invalid_request_error",
        "param": "response_format",
        "code": "None"
    }
}

How to fix it?

I ran pip install -U openai; I am using openai==1.40.1 and Python 3.11.


It’s not available yet on Azure, but if you want to see what it’s like, you can try this:

It works with all models that support function calling.

It’s not an ideal solution; I am just going to wait for the actual structured outputs to be available.

import os
import json
from openai import AzureOpenAI
from pprint import pprint

# Initialize the Azure OpenAI client
client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"), 
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),  
    api_version=os.getenv("OpenAI_API_VERSION")
)
deployment_name = os.getenv("AZURE_OPENAI_DEPLOYMENT_ID")

# Initial user message
messages = [
    {"role": "system", "content": "You are a helpful math tutor who solves math questions step by step."},
    {"role": "user", "content": "solve 8x + 31 = 2"}
]

# Define the JSON schema for structured output
tools = [
    {
        "type": "function",
        "function": {
            "name": "math_response",
            "description": "help solve math questions step by step following defined format.",
            "parameters": {
                "type": "object",
                "properties": {
                    "steps": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "explanation": {"type": "string"},
                                "output": {"type": "string"}
                            },
                            "required": ["explanation", "output"]
                        }
                    },
                    "final_answer": {"type": "string"}
                },
                "required": ["steps", "final_answer"], 
            }
        }
    }
]

# First API call: Ask the model to use the function
response = client.chat.completions.create(
    model=deployment_name,
    messages=messages,
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "math_response"}},  # force the function call
)

# Process the model's response
response_message = response.choices[0].message

# Handle function calls
if response_message.tool_calls:
    for tool_call in response_message.tool_calls:
        function_args = json.loads(tool_call.function.arguments)
        pprint(function_args)
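As a side note, the hand-written JSON schema in tools above doesn’t have to be maintained by hand. A sketch, reusing the Step/MathResponse Pydantic models from the original post: Pydantic’s model_json_schema() can generate the function parameters, which keeps the schema and the models in sync.

```python
from pydantic import BaseModel


class Step(BaseModel):
    explanation: str
    output: str


class MathResponse(BaseModel):
    steps: list[Step]
    final_answer: str


# Generate the function parameters from the model instead of writing them
# by hand; the nested Step objects end up under "$defs".
schema = MathResponse.model_json_schema()

tools = [
    {
        "type": "function",
        "function": {
            "name": "math_response",
            "description": "Solve math questions step by step in a defined format.",
            "parameters": schema,
        },
    }
]

print(schema["required"])
```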

Here is the output:

{'steps': [{'explanation': 'Start with the equation 8x + 31 = 2.',
   'output': '8x + 31 = 2'},
  {'explanation': 'Subtract 31 from both sides to isolate the term with x.',
   'output': '8x = 2 - 31'},
  {'explanation': 'Simplify the right side: 2 - 31 = -29.',
   'output': '8x = -29'},
  {'explanation': 'Divide both sides by 8 to solve for x.',
   'output': 'x = -29/8'},
  {'explanation': 'Simplifying -29/8 gives the final answer.',
   'output': 'x = -3.625'}],
 'final_answer': 'x = -3.625'}
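To get typed objects like the ones the OP wanted, the parsed arguments can be validated against the same Pydantic models. A sketch (the dict is abbreviated from the output above; model_validate raises pydantic.ValidationError if the model returns a malformed payload):

```python
from pydantic import BaseModel


class Step(BaseModel):
    explanation: str
    output: str


class MathResponse(BaseModel):
    steps: list[Step]
    final_answer: str


# function_args as produced by json.loads(tool_call.function.arguments),
# hard-coded here with (abbreviated) values from the output above.
function_args = {
    "steps": [
        {"explanation": "Subtract 31 from both sides.", "output": "8x = -29"},
        {"explanation": "Divide both sides by 8.", "output": "x = -29/8"},
    ],
    "final_answer": "x = -3.625",
}

# Validate the raw dict into typed objects.
result = MathResponse.model_validate(function_args)
print(result.final_answer)  # → x = -3.625
```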

Hello there,
I am having the same issue: the parameter response_format is expected to be one of "text" or "json_object".

The functionality should already be available, as a blog article by Steve Sweetman was published on August 7th: [Announcing a new OpenAI feature for developers on Azure].
However, Azure’s documentation doesn’t show the new parameter: see
[chatCompletionResponseFormat].

Does anyone have any idea of what the solution might be?

Best regards,
twaaaaaan

PS: Sorry, I can’t include links in my post.

Follow this article for using structured output with Azure OpenAI:

Nice, an article behind a paywall as an answer. Not only that, it’s totally useless, as the article misleads the reader and ends up talking about function calling, not structured output as the OP asked.

Structured Output is not supported by Azure OpenAI yet as of the latest API version, “2024-07-01-preview”.

Hi kafran,

Thank you for your response. Could you clarify where you found the information regarding the unavailability of this feature? The following Azure article seems to indicate otherwise: [Announcing a new OpenAI feature for developers on Azure].

My experience with these announcements shows that Microsoft is trying to ride the OpenAI hype without offering practical execution or being truthful at all. The last announcement was about the availability of GPT-4o Mini, but it appeared on Azure well after OpenAI’s announcement.

As for the Azure OpenAI API, you can find all the specifications here, the latest being “2024-07-01-preview”: github.com/Azure/azure-rest-api-specs/tree/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview

100% agree. MS announcements are BS.

For users coming across this topic, please note that as of the time I’m writing this, “Structured Outputs” is available via the gpt-4o model (2024-08-06). Unfortunately, it’s only available in 5 regions at the moment.

Official doc about it here: Azure OpenAI Service models - Azure OpenAI | Microsoft Learn