Structured Outputs for function calling can be enabled by adding a single key, strict: true,
to the function's JSON schema.
Can you point to the problematic part in the docs?
This is NOT true. I tried, and it does not work.
You need to add strict: true to the tool schema, not the function schema. If you look at the doc and the examples, they are different.
Look at the Tools section:
click "Show properties", then click "Show properties" on function. You can see strict: true is under the function's schema, alongside parameters and description.
But this is not the case; I tried, and it doesn't work.
Then you can go to your intro cookbook:
look at the example with the product_search function.
I will post it here. You can clearly see strict: true is outside of function; it's at the same level as function. It's part of the Tool schema, not the function schema.
And this is the one that worked.
product_search_function = {
    "type": "function",
    "function": {
        "name": "product_search",
        "description": "Search for a match in the product database",
        "parameters": {
            "type": "object",
            "properties": {
                "category": {
                    "type": "string",
                    "description": "The broad category of the product",
                    "enum": ["shoes", "jackets", "tops", "bottoms"]
                },
                "subcategory": {
                    "type": "string",
                    "description": "The sub category of the product, within the broader category",
                },
                "color": {
                    "type": "string",
                    "description": "The color of the product",
                },
            },
            "required": ["category", "subcategory", "color"],
            "additionalProperties": False,
        }
    },
    "strict": True
}
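To make the disagreement concrete, here is a minimal sketch of the two placements being debated (my own illustration, not taken from the docs; the parameter names are just examples):

```python
# A minimal "parameters" schema shared by both variants.
params = {
    "type": "object",
    "properties": {"passphrase": {"type": "string"}},
    "required": ["passphrase"],
    "additionalProperties": False,
}

# Placement A: "strict" inside the "function" object (what the API reference shows).
tool_a = {
    "type": "function",
    "function": {"name": "allowEntry", "strict": True, "parameters": params},
}

# Placement B: "strict" at the tool level, next to "function" (what the cookbook shows).
tool_b = {
    "type": "function",
    "strict": True,
    "function": {"name": "allowEntry", "parameters": params},
}

print("strict" in tool_a["function"], "strict" in tool_b)  # True True
```

The two dicts differ only in which level carries the "strict" key, which is exactly the inconsistency between the API reference and the cookbook example.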
Working fine on my end.
Here’s my code:
import json

from openai import OpenAI

client = OpenAI()


def allowEntry(passphrase: str):
    """Returns "True" if the passphrase is correct, otherwise "False"."""
    correct_passphrase = "OpenSesame123"
    if passphrase == correct_passphrase:
        return "True"
    else:
        return "False"


def run_conversation():
    # Step 1: send the conversation and available functions to the model
    tools = [
        {
            "type": "function",
            "function": {
                "name": "allowEntry",
                "strict": True,
                "parameters": {
                    "type": "object",
                    "properties": {
                        "passphrase": {
                            "type": "string",
                            "description": "Passphrase to allow entry.",
                        },
                    },
                    "required": ["passphrase"],
                    "additionalProperties": False,
                },
                "description": "Allows entry if the passphrase is correct, and returns True if allowed.",
            },
        }
    ]
    conversation = [
        {
            "role": "system",
            "content": [
                {
                    "type": "text",
                    "text": "You are sentry GPT. You watch the door and only allow entry to authorised humans with the correct passphrase.",
                }
            ],
        },
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "My name is Sukhman. May I come in? OpenSesame123",
                },
            ],
        },
    ]
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=conversation,
        tools=tools,
        tool_choice="auto",  # auto is default, but we'll be explicit
    )
    print(response)
    response_message = response.choices[0].message
    tool_calls = response_message.tool_calls
    # Step 2: check if the model wanted to call a function
    if tool_calls:
        # Step 3: call the function
        # Note: the JSON response may not always be valid; be sure to handle errors
        available_functions = {
            "allowEntry": allowEntry,
        }  # only one function in this example, but you can have multiple
        conversation.append(
            response_message
        )  # extend conversation with assistant's reply
        # Step 4: send the info for each function call and function response to the model
        for tool_call in tool_calls:
            function_name = tool_call.function.name
            function_to_call = available_functions[function_name]
            function_args = json.loads(tool_call.function.arguments)
            function_response = function_to_call(
                passphrase=function_args.get("passphrase"),
            )
            conversation.append(
                {
                    "tool_call_id": tool_call.id,
                    "role": "tool",
                    "name": function_name,
                    "content": function_response,
                }
            )  # extend conversation with function response
        second_response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=conversation,
        )  # get a new response from the model where it can see the function response
        return second_response


print(run_conversation())
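One thing worth noting either way: strict mode also constrains the "parameters" schema itself. Every property must appear in "required" and "additionalProperties" must be False. A small self-check along these lines (my own hypothetical helper, not part of the OpenAI SDK) can catch a non-conforming schema before a request is ever sent:

```python
def strict_ready(parameters: dict) -> bool:
    """Hypothetical helper: check that a function's "parameters" schema meets
    the strict-mode constraints (every property listed in "required", and
    "additionalProperties" set to False)."""
    props = set(parameters.get("properties", {}))
    required = set(parameters.get("required", []))
    return props == required and parameters.get("additionalProperties") is False


params = {
    "type": "object",
    "properties": {"passphrase": {"type": "string"}},
    "required": ["passphrase"],
    "additionalProperties": False,
}
print(strict_ready(params))  # True
```

Running the check locally avoids a round trip to the API just to discover a schema error.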
Then why do the examples say it the other way?
My experience is: once I followed the example, things started to work, and I didn't change the schema back to try again.
Are you saying strict: true could go anywhere, either on the function or on the tool?
I could go back and try again, but the inconsistency speaks for itself: either there is a bug in the API reference doc, or a bug in the examples in the cookbook.
Can you please share which examples say the other way?
It’s in my previous post!
"Go to your intro cookbook": I can't post a link here, so I could only copy the code. I removed the leading "h" from the link. Go to my previous post; the code is from your cookbook, not mine.
ttps://cookbook.openai.com/examples/structured_outputs_intro