Incomplete Results Using Pydantic and Function Calling

I am using Pydantic to generate a JSON Schema that is used in conjunction with Function Calling. When I use gpt-4-1106-preview or gpt-3.5-turbo-16k to have the models summarize text, it’s likely that some values for fields in the returned JSON will be empty strings.

I am not setting max_tokens, and I have tried prompts of varying sizes in terms of tokens, but the results can be, and often are, incomplete in that many values are returned as empty strings. Setting params["response_format"] = { "type": "json_object" } seems to have no impact on this issue.

Any recommendations on how to handle this? Any thoughts are appreciated. 🙂

The Pydantic classes look something like this:

from pydantic import BaseModel, Field

class Title(BaseModel):
    title: str = Field(..., description="Title of the text", max_length=40)
    description: str = Field(..., description="Description of the text", max_length=100)

class Topics(BaseModel):
    ...  # fields omitted for brevity

class Introduction(BaseModel):
    ...  # fields omitted for brevity
class General(BaseModel):
    page1: Title
    page2: Topics
    page3: Introduction
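
One thing worth noting about the classes above: the Field constraints only cap the length, so an empty string still validates. To make the empty values fail loudly instead of passing silently, min_length can be added (a sketch, assuming Pydantic v2; StrictTitle is just a name I'm using here to avoid clashing with the class above):

```python
from pydantic import BaseModel, Field, ValidationError

class StrictTitle(BaseModel):
    # min_length=1 rejects the empty strings the model sometimes returns
    title: str = Field(..., description="Title of the text", min_length=1, max_length=40)
    description: str = Field(..., description="Description of the text", min_length=1, max_length=100)

try:
    StrictTitle(title="", description="")
except ValidationError:
    print("empty strings rejected")
```

This doesn't stop the model from returning empty values, but it turns the silent failure into a ValidationError that can be caught and retried.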

Here’s what the function calling portion looks like:

params = {
    "temperature": 0.7,
    "top_p": 1,
    "frequency_penalty": 0,
    "presence_penalty": 0,
}

response = openai.ChatCompletion.create(
    model=model,  # gpt-4-1106-preview or gpt-3.5-turbo-16k
    messages=messages,
    functions=[
        {
            "name": "create_summary",
            "description": "Create a summary of the text according to the schema",
            "parameters": General.model_json_schema(),
        }
    ],
    function_call={"name": "create_summary"},
    **params,
)
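
For what it's worth, here is the kind of post-processing check I can run on the returned arguments to see exactly which fields came back empty (find_empty_fields is a hypothetical helper, and the sample arguments string is made up):

```python
import json

def find_empty_fields(obj, path=""):
    """Recursively collect the JSON paths whose values are empty strings."""
    empty = []
    if isinstance(obj, dict):
        for key, value in obj.items():
            empty.extend(find_empty_fields(value, f"{path}.{key}" if path else key))
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            empty.extend(find_empty_fields(value, f"{path}[{i}]"))
    elif obj == "":
        empty.append(path)
    return empty

# The arguments string comes back in
# response["choices"][0]["message"]["function_call"]["arguments"]
args = json.loads('{"page1": {"title": "Summary", "description": ""}}')
print(find_empty_fields(args))  # ['page1.description']
```

Knowing which paths are empty would at least make it possible to re-prompt for only the missing pieces rather than regenerating the whole summary.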