"'content' is a required property" 400 error

I pasted the exact documentation code, even tried to fix it, tried to look up answers, tried ChatGPT, and have yet to find a single fix or way to solve my issue.

The code uses function calling via the new API. The error says content is required, but I have no clue where, because the documentation says nothing about this. Even if I fix the content issue, the model still refuses to send a tool choice with its response, so it always results in an error; at least I think that's what's happening.

Either way I have no idea how to fix it, no idea what’s wrong, and no idea why nothing is mentioned in the documentation anywhere about this.

Someone please help


It will help us check your problem if you can provide us with your code or even just a snippet of how you call the API.


Here you go:

import openai
import json

api_key = "sk-..."  # hidden for reply
openai.api_key = api_key

# Dummy function to simulate getting weather data
def get_current_weather(location, unit="fahrenheit"):
    """Get the current weather in a given location"""
    # Dummy response simulating an API call
    dummy_responses = {
        "tokyo": {"location": "Tokyo", "temperature": "10", "unit": "celsius"},
        "san francisco": {"location": "San Francisco", "temperature": "72", "unit": "fahrenheit"},
        "paris": {"location": "Paris", "temperature": "22", "unit": "celsius"}
    }
    response = dummy_responses.get(location.lower(), {"location": location, "temperature": "unknown"})
    return json.dumps(response)

# Function to run a conversation simulation
def run_conversation():
    # Step 1: Send the user query and the available functions to the model
    messages = [{"role": "user", "content": "What's the weather like in San Francisco, Tokyo, and Paris?"}]
    
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g., San Francisco, CA"
                        },
                        "unit": {
                            "type": "string",
                            "enum": ["celsius", "fahrenheit"]
                        }
                    },
                    "required": ["location"]
                }
            }
        }
    ]
    
    response = openai.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=messages,
        tools=tools,
        tool_choice="auto"
    )
    
    # Get the tool calls from the response
    response_message = response.choices[0].message
    tool_calls = response_message.tool_calls

    # Process the tool calls, if any
    if tool_calls:
        for tool_call in tool_calls:
            function_name = tool_call.function.name
            function_to_call = globals().get(function_name)  # Get the function from the global scope
            
            # Ensure the function exists
            if function_to_call:
                function_args = json.loads(tool_call.function.arguments)
                
                # Call the function and get the response
                # Call the function; it returns a JSON string
                function_response = function_to_call(**function_args)

                # Append the function response to the messages list as a tool message
                messages.append({
                    "tool_call_id": tool_call.id,
                    "role": "tool",
                    "name": function_name,
                    "content": function_response  # keep it as the JSON string
                })
            else:
                print(f"No function found for the name {function_name}")
    else:
        print("No tool calls made by the model.")
    
    # Step 4: (Optional) Make another API call to continue the conversation
    # Depending on your application, you might want to send messages back to the model to get a user-facing message
    # For example:
    response = openai.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=messages,
        tools=tools,
        tool_choice="none"  # Set to "none" to get a user-facing message without making tool calls
    )

    return json.dumps(messages, indent=2)

print(run_conversation())

You need to put back the first response of the function calling to the messages variable. I think you are missing that in your code.

messages.append(response_message)
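For what it's worth, here is an offline sketch of the shape that appended assistant message needs to have when built by hand (the id, arguments, and city are placeholders). The key detail is that a content key is present, even if null, alongside tool_calls, and that the tool message echoes the same call id:

```python
import json

# Hand-built assistant message carrying a tool call; id/arguments are placeholders.
assistant_message = {
    "role": "assistant",
    "content": None,  # the key must be present, even when there is no text
    "tool_calls": [
        {
            "id": "call_abc123",  # placeholder id
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "arguments": json.dumps({"location": "Tokyo", "unit": "celsius"}),
            },
        }
    ],
}

messages = [{"role": "user", "content": "What's the weather in Tokyo?"}]
messages.append(assistant_message)

# The matching tool result refers back to the same id.
messages.append({
    "tool_call_id": "call_abc123",
    "role": "tool",
    "name": "get_current_weather",
    "content": json.dumps({"location": "Tokyo", "temperature": "10"}),
})

print([m["role"] for m in messages])  # ['user', 'assistant', 'tool']
```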

I added that line, I still get the error:

raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "'content' is a required property - 'messages.1'", 'type': 'invalid_request_error', 'param': None, 'code': None}}

The line it fails at is the first line of this:

response = openai.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=messages,
    tools=tools,
    tool_choice="none"  # Set to "none" to get a user-facing message without making tool calls
)

return json.dumps(messages, indent=2)

print(run_conversation())

Yes I am seeing the same error. I pasted the example code from this section: OpenAI Platform. And it keeps giving me this 400 error complaining content is required. I would guess there’s something wrong in the format of the example.

I noticed this is how it puts back tool-call results in this example:

{
                    "tool_call_id": tool_call.id,
                    "role": "tool",
                    "name": function_name,
                    "content": function_response,
                }

which is different from how we should do it with the Assistants API:

tool_outputs=[
      {
        "tool_call_id": call_ids[0],
        "output": "22C",
      },
      {
        "tool_call_id": call_ids[1],
        "output": "LA",
      },
    ]

Maybe this is the problem?
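To make the difference concrete, here are the two payload shapes side by side as plain dicts (ids and values are placeholders): the Chat Completions tool message uses role, name, and content, while the Assistants API tool_outputs entry uses output and no role at all.

```python
# Chat Completions tool-result message (placeholder values)
chat_tool_message = {
    "tool_call_id": "call_abc123",
    "role": "tool",
    "name": "get_current_weather",
    "content": "22C",
}

# Assistants API tool_outputs entry (placeholder values)
assistants_tool_output = {
    "tool_call_id": "call_abc123",
    "output": "22C",
}

# Keys that exist only in the Chat Completions shape
print(sorted(set(chat_tool_message) - set(assistants_tool_output)))
# ['content', 'name', 'role']
```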

I'm not using the Assistants API, but I did figure it out. OpenAI's documentation is wrong: changing "role" from "tool" to "function" fixes it. I believe there were several other problems that I fixed as well, but the one that stopped the errors was changing "tool" to "function".

So instead of

{
    "tool_call_id": tool_call.id,
    "role": "tool",
    "name": function_name,
    "content": function_response,
}

it would be

{
    "tool_call_id": tool_call.id,
    "role": "function",
    "name": function_name,
    "content": function_response,
}
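For context, the "function" role comes from the older, since-deprecated functions/function_call API, which used no tool-call ids at all. A sketch of that legacy message sequence, with placeholder values:

```python
import json

# Legacy function-calling message sequence (placeholder values).
legacy_messages = [
    {"role": "user", "content": "What's the weather in Paris?"},
    {
        "role": "assistant",
        "content": None,
        "function_call": {  # legacy field; the tools API uses "tool_calls"
            "name": "get_current_weather",
            "arguments": json.dumps({"location": "Paris", "unit": "celsius"}),
        },
    },
    {
        "role": "function",  # legacy role; the tools API expects "tool"
        "name": "get_current_weather",
        "content": json.dumps({"location": "Paris", "temperature": "22"}),
    },
]

print([m["role"] for m in legacy_messages])  # ['user', 'assistant', 'function']
```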

Could you please share what other fixes you had? I still have the same error after changing tool to function.

Yes I can. Here is the code:

import openai  
import json  

api_key = "API-KEY-HERE"  
openai.api_key = api_key

def get_current_weather(location, unit="fahrenheit"):
    """Get the current weather in a given location"""
    if "tokyo" in location.lower():
        return json.dumps({"location": "Tokyo", "temperature": "10", "unit": "celsius"})
    elif "san francisco" in location.lower():
        return json.dumps({"location": "San Francisco", "temperature": "72", "unit": "fahrenheit"})
    elif "paris" in location.lower():
        return json.dumps({"location": "Paris", "temperature": "22", "unit": "celsius"})
    else:
        return json.dumps({"location": location, "temperature": "unknown"})

def run_conversation():
    messages = [{
        "role": "user",
        "content": "What's the weather in Tokyo, San Francisco, and Paris. Provide the temps for Tokyo in celsius, San Francisco in fahrenheit, and Paris in celsius."
    }]
    
    
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g., San Francisco, CA"
                        },
                        "unit": {
                            "type": "string",
                            "enum": ["celsius", "fahrenheit"]
                        }
                    },
                    "required": ["location", "unit"]
                }
            }
        }
    ]
    
    response = openai.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=messages,
        tools=tools,
        tool_choice="auto"
    )
    
    response_message = response.choices[0].message
    tool_calls = response_message.tool_calls
    
    if tool_calls:
        available_functions = {
            "get_current_weather": get_current_weather,
        }  
        for tool_call in tool_calls:
            function_name = tool_call.function.name
            function_to_call = available_functions.get(function_name)

            if function_to_call:
                function_args = json.loads(tool_call.function.arguments)

                function_response = function_to_call(
                    location=function_args.get("location"),
                    unit=function_args.get("unit"),
                )
                messages.append({
                    "role": "assistant",
                    "content": str(response),
                })
                messages.append({
                    "tool_call_id": tool_call.id,
                    "role": "function",
                    "name": function_name,
                    "content": function_response,
                })
            else:
                print(f"No function found for the name {function_name}")
    else:
        print("No tool calls made by the model.")
    
    response = openai.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=messages,
        tools=tools,
        tool_choice="none"
    )
    messages.append({
        "role": "assistant",
        "content": str(response),
    })
    
    print(messages)
    return json.dumps(messages, indent=2)

print(run_conversation())

Also, they changed what the function response must be: it now has to be returned as a JSON string, as in the code I provided.


So the solution is to duplicate the assistant message for every tool?

This is the same before every tool call you loop through, right?

messages.append({
    "role": "assistant",
    "content": str(response),
})

Yes, I duplicate it because the API requires an assistant-role message before every tool-response-role message in the conversation history.

It wants roughly this format for tool calls (the system message appears only once, at the top, even as more messages are appended):

System
User
Assistant
Tool
Assistant

With no tool call it’s as it normally is:

System
User
Assistant
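That ordering can be sketched offline with plain dicts (all ids and values are placeholders). Worth noting: in the current tools API a single assistant message can carry several tool_calls, each answered by its own tool message, rather than duplicating the assistant message per call.

```python
import json

# Placeholder tool calls, as if returned by the model.
tool_calls = [
    {"id": "call_1", "type": "function",
     "function": {"name": "get_current_weather",
                  "arguments": json.dumps({"location": "Tokyo"})}},
    {"id": "call_2", "type": "function",
     "function": {"name": "get_current_weather",
                  "arguments": json.dumps({"location": "Paris"})}},
]

messages = [
    {"role": "system", "content": "You are a weather bot."},
    {"role": "user", "content": "Weather in Tokyo and Paris?"},
    {"role": "assistant", "content": None, "tool_calls": tool_calls},
]

# One tool message per tool call, each echoing its call id.
for call in tool_calls:
    messages.append({
        "tool_call_id": call["id"],
        "role": "tool",
        "name": call["function"]["name"],
        "content": json.dumps({"temperature": "unknown"}),
    })

print([m["role"] for m in messages])
# ['system', 'user', 'assistant', 'tool', 'tool']
```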

400 errors are mostly website server errors, so you should open your cPanel to check the error logs.

Yes, I also discovered the documentation is wrong.

That's because messages.append(response_message) appends a message object, not a JSON-style dict with role and content keys, so it obviously throws an error.
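One way around that is to flatten the object into a plain dict before appending. The sketch below fakes the message object with SimpleNamespace so it runs standalone; with the real openai>=1.0 SDK, response_message.model_dump() should give you a similar dict (that is an assumption about your SDK version).

```python
from types import SimpleNamespace

# Stand-in for the SDK's message object (placeholder values).
response_message = SimpleNamespace(
    role="assistant",
    content=None,
    tool_calls=[SimpleNamespace(
        id="call_abc123",
        type="function",
        function=SimpleNamespace(
            name="get_current_weather",
            arguments='{"location": "Tokyo"}',
        ),
    )],
)

# Flatten into the plain dict shape the endpoint validates.
assistant_dict = {
    "role": response_message.role,
    "content": response_message.content or "",  # ensure the content key is a string
    "tool_calls": [
        {"id": c.id, "type": c.type,
         "function": {"name": c.function.name,
                      "arguments": c.function.arguments}}
        for c in (response_message.tool_calls or [])
    ],
}

print(assistant_dict["role"], repr(assistant_dict["content"]))
```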

I am able to get my code to call the function, but yes, responding afterwards always throws an error:

openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid type for 'messages[4].content[0]': expected an object, but got a string instead.", 'type': 'invalid_request_error', 'param': 'messages[4].content[0]', 'code': 'invalid_type'}}

And this is messages[4] which is after my function call:

{'tool_call_id': 'call_hhKpjAtgSxgfWgh1BK1Jcizr', 'role': 'tool', 'name': 'fetch_listing_data', 'content': {'foo': 'bar'}}

Funny, because it's a JSON string, and I don't know what they want in terms of an object.
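If I'm reading that error right, the content in messages[4] is actually a Python dict ({'foo': 'bar'}), not a JSON string. Serializing it with json.dumps before putting it in the tool message should satisfy the validator; a minimal sketch reusing the values from the error:

```python
import json

# The function result as a Python dict, as seen in the rejected message.
function_response = {"foo": "bar"}

tool_message = {
    "tool_call_id": "call_hhKpjAtgSxgfWgh1BK1Jcizr",
    "role": "tool",
    "name": "fetch_listing_data",
    "content": json.dumps(function_response),  # a string, not a dict
}

print(type(tool_message["content"]).__name__)  # str
```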