I’m adding an example with code to explain the scenario.

This is what I want to do: let’s say the function `get_order_details` returns the order details for a given `order_id`. We can attach this tool/function to the LLM, and then the LLM can answer questions related to order details.
```python
# Define the get_order_details function
def get_order_details(order_id):
    # Mock implementation for the example
    return {
        "order_id": order_id,
        "status": "shipped",
        "items": [
            {"name": "Laptop", "quantity": 1, "price": 1000},
            {"name": "Mouse", "quantity": 2, "price": 20}
        ],
        "shipping_date": "2024-10-01"
    }
```
We can pass this function as a tool to the OpenAI client like this:

```python
response = openai.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools,  # tool for the order details function (JSON)
)
```
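
For reference, here is roughly what that `tools` list looks like (a sketch; the `description` strings are my own placeholder wording):

```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_order_details",
            "description": "Get the details of an order by its ID",
            "parameters": {
                "type": "object",
                "properties": {
                    "order_id": {
                        "type": "string",
                        "description": "The ID of the order",
                    }
                },
                "required": ["order_id"],
            },
        },
    }
]
```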
Example conversation:

User: Can you tell me the order details for order_id: 123
Tool call: get_order_details
Tool output: {…}
Assistant: Your order was shipped on 2024-10-01 …
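
(For completeness, the round trip behind that conversation looks roughly like this; a sketch assuming the `tools` definition above:)

```python
import json

# First call: the model decides to call the tool
response = openai.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
tool_call = response.choices[0].message.tool_calls[0]

# Run the tool with the arguments the model produced
args = json.loads(tool_call.function.arguments)
result = get_order_details(args["order_id"])

# Feed the tool result back and get the plain-text answer
messages.append(response.choices[0].message)
messages.append({
    "role": "tool",
    "tool_call_id": tool_call.id,
    "content": json.dumps(result),
})
response = openai.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
print(response.choices[0].message.content)  # plain text, e.g. "Your order was shipped on ..."
```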
Here, the response is in plain text, and I want to get the response in a fixed JSON schema (`OrderDetailsResponse`), as shown below.

Assistant: {order details in JSON string}
```python
from pydantic import BaseModel

# Define the response format using Pydantic
class OrderDetailsResponse(BaseModel):
    order_id: str
    status: str
    items: list[dict]
    shipping_date: str
```
With the Structured Outputs feature, we can provide the above schema class via the `response_format` parameter to get the final answer in this JSON schema instead of plain text.
Here is an example from the documentation:

```python
from pydantic import BaseModel
from openai import OpenAI

client = OpenAI()

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

completion = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",
    messages=[
        {"role": "system", "content": "Extract the event information."},
        {"role": "user", "content": "Alice and Bob are going to a science fair on Friday."},
    ],
    response_format=CalendarEvent,  # here
)

event = completion.choices[0].message.parsed
```
But this example does not use any tools, and I’m not able to find any example that uses tools and `response_format` simultaneously.
For now, my workaround is passing `OrderDetailsResponse` as a tool alongside `get_order_details` and instructing the model (via the system prompt) to call this `OrderDetailsResponse` tool before generating the plain-text output.
```python
response = openai.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools,  # tools for the order details function and OrderDetailsResponse
)
```
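
One way to build that `tools` list is to convert the Pydantic model with the SDK’s `openai.pydantic_function_tool` helper (a sketch of my setup; `get_order_details_tool` stands for the hand-written JSON definition shown earlier):

```python
import openai

tools = [
    get_order_details_tool,  # the hand-written JSON definition from earlier
    # Expose the Pydantic model as a callable "function"; the helper
    # generates a JSON schema for the tool from the model's fields.
    openai.pydantic_function_tool(OrderDetailsResponse),
]
```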
Once the LLM calls this `OrderDetailsResponse` tool, I will have the arguments in the JSON schema. But this approach is a bit unreliable, as the LLM needs to call `OrderDetailsResponse` last; otherwise I will not get the response in the JSON schema.
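
(Extracting the structured result then looks roughly like this; a sketch assuming the model did make the `OrderDetailsResponse` call:)

```python
import json

tool_call = response.choices[0].message.tool_calls[0]
if tool_call.function.name == "OrderDetailsResponse":
    # The arguments string already matches the schema, so it can be
    # validated back into the Pydantic model
    details = OrderDetailsResponse(**json.loads(tool_call.function.arguments))
```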
Please let me know if you need any clarification.
Thank you.