GPT-5: 400 Unsupported value: 'messages[6].role' does not support 'function' with this model

It appears that the “function” role is not supported on GPT-5. I tried changing it to “tool”, but that gives me this error:
400 Invalid parameter: messages with role ‘tool’ cannot be used when ‘functions’ are present. Please use ‘tools’ instead of ‘functions’.

Does this mean that the legacy “function” notation is not supported at all with GPT-5?
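For context, this is roughly the shape difference I mean, sketched from memory (the tool name, parameters, and call id are placeholders, not from a real call):

# Legacy "functions" notation (what my code currently sends):
legacy_kwargs = {
    "functions": [
        {
            "name": "get_weather",
            "description": "Get current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
}
# ...and the function result went back as a 'function' role message:
legacy_result = {"role": "function", "name": "get_weather", "content": '{"temp_c": 20}'}

# Current "tools" notation (what the errors seem to be pushing toward):
tools_kwargs = {
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}
# ...with the result going back as a 'tool' role message tied to the call id:
tools_result = {"role": "tool", "tool_call_id": "call_abc123", "content": '{"temp_c": 20}'}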


To be honest, I think this is very, very badly documented and totally confusing, if not just wrongly documented. Excuse my slight rage-tone, but I also spent a lot of time on this bug and the documentation is totally misleading.

The solution seems to be:

{
    "tool_call_id": tool_call['id'],
    "role": "tool",
    "type": "function_tool_output",
    "name": function_name,
    "content": content
}

If this is not correct and somehow still wrong (even though it works), pleaaaase fix the documentation.
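For anyone else landing here, a minimal sketch of the round trip that message sits in, assuming the openai Python SDK on Chat Completions (the get_time tool and its dispatch are invented for illustration; only role, tool_call_id and content appear to be strictly required on the tool message):

import json
from datetime import datetime, timezone
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_time",
            "description": "Get the current UTC time.",
            "parameters": {"type": "object", "properties": {}},
        },
    }
]
messages = [{"role": "user", "content": "What time is it (UTC)?"}]

first = client.chat.completions.create(
    model="gpt-5-mini", messages=messages, tools=tools
)
assistant_msg = first.choices[0].message

if assistant_msg.tool_calls:
    # Keep the assistant message (with its tool_calls) in the history.
    messages.append(assistant_msg)
    for tool_call in assistant_msg.tool_calls:
        function_name = tool_call.function.name
        # Invented dispatch: a real app would route on function_name.
        result = {"tool": function_name, "utc_time": datetime.now(timezone.utc).isoformat()}
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": json.dumps(result),
        })
    second = client.chat.completions.create(
        model="gpt-5-mini", messages=messages, tools=tools
    )
    print(second.choices[0].message.content)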

Just to back my confusion with quotes:

The “Using GPT-5” guide says:

With GPT-5, we’re introducing a new capability called custom tools, which lets models send any raw text as tool call input but still constrain outputs if desired.

Afterwards you link to the Function calling docs. Right at the beginning, the docs state “Function calling (also known as tool calling)”, implying function calling and tool calling are the same. However, there is also the “Functions vs tools” accordion, implying… they are not the same? There it states:

In addition to function tools, there are custom tools (described in this guide) that work with free text inputs and outputs.

So the Function calling docs themselves say that the page is about custom tools (which seems to be a synonym for function calling / tool calling / tools, all at once).

The Function calling docs later show a code example of how to append the response of a tool call to the chat history / context:

# 4. Provide function call results to the model
input_list.append({
    "type": "function_call_output",
    "call_id": function_call.call_id,
    "output": json.dumps(result),
})

To be fair, I’d overlooked that the example refers to the Responses API, not the Chat Completions API I’m working with. That shape works on the Responses API, but the Chat Completions API works differently. If you send it to gpt-5-mini, you get the error:

“Missing required parameter: 'messages[25].role'.”

Nothing is stated about the “role” parameter in the docs, at least not anywhere a dev would look within roughly an hour of searching. One can only guess a role. Somehow I managed to assume that “type” should be “role”, so I set "role": "function_tool_output". Unfortunately, this leads to the following error:

"Invalid value: \'function_tool_output\'. Supported values are: \'system\', \'assistant\', \'user\', \'function\', \'tool\', and \'developer\'."

Ahhh… clearly it should be "role": "function". BUT hell no, this leads to the error:

Unsupported value: 'messages[33].role' does not support 'function' with this model.

In the end, after random trial and error, I found out that the solution above works.
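To sum up what finally clicked, the two shapes side by side as a sketch (the Responses side mirrors the docs example above; the Chat Completions side is what worked for me; the call id is a placeholder):

# Responses API: the tool result is an item in the 'input' list, keyed by call_id
responses_tool_output = {
    "type": "function_call_output",
    "call_id": "call_abc123",            # from the function_call item
    "output": '{"horoscope": "..."}',
}

# Chat Completions API: the tool result is a message with role 'tool',
# keyed by tool_call_id (no 'type'/'output' fields needed)
chat_completions_tool_output = {
    "role": "tool",
    "tool_call_id": "call_abc123",       # from message.tool_calls[n].id
    "content": '{"horoscope": "..."}',
}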


The essential problem: OpenAI has thoroughly damaged the API documentation for the Chat Completions endpoint. There used to be a toggle on each page, easy to miss but there; that is gone. The Responses endpoint can only be described as foisted on you.

What the documentation has is this, where the “input” list is only used by the Responses API:

# 3. Execute the function logic for get_horoscope
result = {"horoscope": get_horoscope(function_call_arguments["sign"])}

# 4. Provide function call results to the model
input_list.append({
    "type": "function_call_output",
    "call_id": function_call.call_id,
    "output": json.dumps(result),
})

Then, OpenAI treats how functions work internally like some kind of secret, not disclosing that, to the AI, there are only named “tools” from the start; that “functions” is one type of tool where you can place your own descriptions of the multiple code paths your app can respond with; and that all the other “tools” are for internal use only.


Here is a complete demonstration, with a foundation written in 2023 when “tools” was introduced as a parameter replacing “functions”. “tools” is now a container for functions, and functions are the only thing you can send in it on Chat Completions. You can either call tools with a question that stimulates parallel tool calls, or run with a built-in return demonstration.

Copy this into a code editor IDE to get a better view. Run in a Jupyter notebook or REPL environment, and all the variables will be globals you can inspect after it runs.

"""
demo: assembling a multi-step function-call
walkthrough into one runnable Python file.

This file shows how to:
 - declare tool specs (functions the model may call)
 - prepare the chat messages (system/user)
 - see how the AI emits a tool call to a function
 - see how the AI uses *parallel tool calls*
 - optionally replay a prior assistant message
   and the subsequent tool returns (controlled by the
   global RETURN_A_TOOL_RESPONSE flag) and see a final answer.
 - perform a chat completion call that uses
   function/tool parameters and message construction.
 - demonstrate how to parse a "with_raw_response"
   response and extract headers, choices, usage, and
   tool_calls. (headers have rate limits, etc)
"""

import json
from openai import OpenAI

# Toggle this to include prior assistant/tool
# responses in the outgoing messages.
RETURN_A_TOOL_RESPONSE = False

# Initialize the official client instance.
client = OpenAI()  # automatic OPENAI_API_KEY retrieval from env var


"""
Tool specification section:
We're enumerating tools (with functions) the assistant may
call. Each entry follows the function-calling JSON
schema: name, description, and JSON schema for
parameters. Description helps the model decide when and how
to use the tool.
"""
toolspec = []
toolspec.extend(
    [
        {
            "type": "function",
            "function": {
                "name": "get_weather_forecast",
                "description": (
                    "Get weather forecast. AI can make multiple"
                    " tool calls in one response."
                ),
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "City and state, e.g. 'Seattle, WA'.",
                        },
                        "format": {
                            "type": "string",
                            "enum": ["celsius", "fahrenheit"],
                            "description": (
                                "Temperature unit to return. 'celsius' or"
                                " 'fahrenheit'."
                            ),
                        },
                        "time_period": {
                            "type": "number",
                            "description": "Length in days or portions.",
                        },
                    },
                    # required keys must match properties above
                    "required": ["location", "format", "time_period"],
                },
            },
        }
    ]
)


"""
Base messages construction:
We always include a 'system' instruction and a
'user' request. This forms the starting point for the
assistant's reasoning about whether to call functions.
"""
base_messages = [
    {
        "role": "system",
        "content": "You are a helpful AI assistant.",
    },
    {
        "role": "user",
        "content": (
            "How hot will it be today in Seattle? And in Miami?"
            " Use multi-tool to get both at the same time."
        ),
    },
]

# Assemble the parameter dictionary template with model.
params = {
    "model": "gpt-5-mini",
    # 'tools' is the function schema list that the model
    # may call during completion (function-calling).
    "tools": toolspec,
    # We'll provide 'messages' below after conditional
    # augmentation depending on RETURN_A_TOOL_RESPONSE.
    "messages": None,  # Added at end
    "max_completion_tokens": 5000,  # must be big
    "reasoning_effort": "low",
}


"""
Optional replay of assistant and tool outputs:
This shows also adding a past assistant function call
and the return result of parallel functions. Also note
the pattern of the AI also writing to the user. If the
global flag is False, we send only the user prompt.
If True, we append the assistant "tool_calls" object
and two 'tool' role messages representing tool returns.
"""
# Start with a shallow copy of base messages.
outgoing_messages = list(base_messages)

if RETURN_A_TOOL_RESPONSE:
    # The assistant previously emitted a message that
    # invoked two function calls (multi-tool).
    assistant_emitted = {
        "role": "assistant",
        "content": "Let me look up the weather in those cities "
                   "for you...",
        # 'tool_calls' is a demo field used to show the
        # assistant's function call history.
        "tool_calls": [
            {
                "id": "call_rygjilssMBx8JQGUgEo7QqeY",
                "type": "function",
                "function": {
                    "name": "get_weather_forecast",
                    # arguments serialized as a JSON string
                    "arguments": (
                        "{\"location\": \"Seattle\", "
                        "\"format\": \"fahrenheit\", "
                        "\"time_period\": 1}"
                    ),
                },
            },
            {
                "id": "call_pI6vxWtSMU5puVBHNm5nJhw3",
                "type": "function",
                "function": {
                    "name": "get_weather_forecast",
                    "arguments": (
                        "{\"location\": \"Miami\", "
                        "\"format\": \"fahrenheit\", "
                        "\"time_period\": 1}"
                    ),
                },
            },
        ],
    }
    outgoing_messages.append(assistant_emitted)

    # Simulated tool return messages. Each has role 'tool'
    # and a 'tool_call_id' linking it to the call above.
    tool_return_seattle = {
        "role": "tool",
        "tool_call_id": "call_rygjilssMBx8JQGUgEo7QqeY",
        "content": (
            "Seattle 2022-12-15 forecast: high 62, low 42, "
            "partly cloudy\n"
        ),
    }
    tool_return_miami = {
        "role": "tool",
        "tool_call_id": "call_pI6vxWtSMU5puVBHNm5nJhw3",
        "content": "Miami 2022-12-15 forecast: high 77, low 66, sunny\n",
    }
    outgoing_messages.append(tool_return_seattle)
    outgoing_messages.append(tool_return_miami)

# Finalize the messages into params for the API call.
params["messages"] = outgoing_messages


"""
Execute the chat completion call:
We use SDK's 'with_raw_response' to show how to extract
response headers alongside the JSON payload.
Wrapped in try/except to handle network or API errors.
Streaming and iterating over stream chunks not in demo.
"""
c = None
try:
    c = client.chat.completions.with_raw_response.create(
        **params
    )
except Exception as e:
    print(f"Error: {e}")

# If we received a raw response, parse out common fields.
if c:
    # Extract headers into globals for demo purposes.
    try:
        headers_dict = c.headers.items().mapping.copy()
    except Exception:
        # Fallback if headers structure differs.
        try:
            headers_dict = dict(c.headers.items())
        except Exception:
            headers_dict = {}

    for key, value in headers_dict.items():
        variable_name = f'headers_{key.replace("-", "_")}'
        globals()[variable_name] = value

    # This line demonstrates that headers_* globals exist.
    # (It will raise if the header key is missing.)
    # remains = headers_x_ratelimit_remaining_tokens

    # Parse the JSON body from the raw response bytes.
    try:
        api_return_dict = json.loads(c.content.decode())
    except Exception:
        api_return_dict = {}

    # Extract typical fields demonstratively (if present).
    api_choice = {}
    if api_return_dict.get("choices"):
        api_choice = api_return_dict.get("choices")[0]

    api_finish_str = api_choice.get("finish_reason")
    usage_dict = api_return_dict.get("usage")
    api_message_dict = api_choice.get("message", {})
    api_message_str = api_message_dict.get("content")
    api_tools_list = api_message_dict.get("tool_calls")

    # Print the assistant's message if present.
    if api_message_str:
        print(api_message_str)

    # If the model included 'tool_calls', pretty-print them.
    if api_tools_list:
        for tool_item in api_tools_list:
            print(json.dumps(tool_item, indent=2))

"""
Example AI output (for demonstration):
Here are the weather forecasts for today:
- Seattle: High 62°F, Low 42°F, Partly Cloudy
- Miami: High 77°F, Low 66°F, Sunny
"""


Absolutely.

This is really bad.

Please don’t turn OpenAI into Apple! :face_vomiting:

We do not want exclusively proprietary endpoints.

The toggle on the documentation was really thoughtful and useful, and I have no issue with Responses being the default (if you must!).

I really hope the marketing dept isn’t carrying sway here!


Thanks for the detailed response. Will circle back to this and give it another go with your solution. Agree that the docs are totally confusing.