Hello,
I would like to know whether nested Pydantic models are supported by the new Structured Outputs feature (which is super useful, thanks).
I tried the following:
```python
from enum import StrEnum
from pydantic import BaseModel, Field

class RouterEval(BaseModel):
    topic: str = Field(description="the topic of the query provided by the human")
    query: str = Field(description="the statement of the user")

class Topics(StrEnum):
    politic = "politic"
    sport = "sport"
    other = "other"
    mathematics = "mathematics"

class RouterEvalWithEnum(BaseModel):
    topic: Topics = Field(description="the topic of the query provided by the human")
    query: str = Field(description="the statement of the user")
```
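For reference, I believe the difference between the two models comes from how pydantic serializes the enum field: when the field also carries a description, pydantic 2.8 appears to wrap the enum reference in an `allOf`/`$ref` instead of an inline `enum` (this is an assumption based on pydantic 2.8 behavior, not something I verified in the SDK). The generated schemas can be inspected like this:

```python
import json

# Minimal sketch: print the JSON schema pydantic generates for each model.
# Assumption: with pydantic 2.8, RouterEvalWithEnum's `topic` property contains
# an `allOf: [{"$ref": "#/$defs/Topics"}]` wrapper, which would match the
# `allOf` mentioned in the error further down.
print(json.dumps(RouterEval.model_json_schema(), indent=2))
print(json.dumps(RouterEvalWithEnum.model_json_schema(), indent=2))
```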
Tests:
```python
from openai import OpenAI

client = OpenAI()

queries: list[str] = [
    "who won the superbowl in 2023 ?",
    "who was the first president of the United States",
    "what is 1+1 equal to ?",
    "what is the weather in Paris ?",
]

res = []
for q in queries:
    completion = client.beta.chat.completions.parse(
        model="gpt-4o-2024-08-06",
        messages=[
            {"role": "system", "content": "Classify the topic of the query between `politic`, `sport`, `mathematics` and `other`. Use `other` if not related to politic, sport, nor mathematics."},
            {"role": "user", "content": q},
        ],
        response_format=RouterEval,
        temperature=0.0,
    )
    res.append(completion.choices[0].message.parsed)
```
With `RouterEval` (plain `str` topic), the results are as expected:

```python
# RouterEval(topic='sport', query='who won the superbowl in 2023 ?'),
# RouterEval(topic='politic', query='who was the first president of the United States'),
# RouterEval(topic='mathematics', query='what is 1+1 equal to ?'),
# RouterEval(topic='other', query='what is the weather in Paris ?')
```
But when using the model with the enum (`RouterEvalWithEnum` as `response_format`), I get the following error:

```
BadRequestError: Error code: 400 - {'error': {'message': "Invalid schema for response_format 'RouterEvalWithEnum': In context=('properties', 'topic'), 'allOf' is not permitted", 'type': 'invalid_request_error', 'param': 'response_format', 'code': None}}
```
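In case it helps, the workaround I am considering (a sketch, not verified against the API) is to replace the `StrEnum` with a `typing.Literal`, since pydantic seems to emit that as an inline `enum` rather than an `allOf`/`$ref`. `RouterEvalWithLiteral` below is just a hypothetical name for this variant:

```python
from typing import Literal

from pydantic import BaseModel, Field

# Sketch of a possible workaround. Assumption: Literal avoids the allOf wrapper
# because pydantic renders it as an inline {"enum": [...], "type": "string"}.
class RouterEvalWithLiteral(BaseModel):
    topic: Literal["politic", "sport", "other", "mathematics"] = Field(
        description="the topic of the query provided by the human"
    )
    query: str = Field(description="the statement of the user")
```

That said, I would prefer to keep the `StrEnum` if nested models / enums are supported.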
My environment:
- Python 3.11.8 (main, Mar 24 2024, 00:21:05) [Clang 11.0.3 (clang-1103.0.32.62)] on darwin
- openai 1.40.1
- pydantic 2.8.2
- pydantic_core 2.20.1
Regards,
Jonathan