It's a Bug! It's an Error! or It is not possible to combine Custom Function with Conversations in Responses

Hello OpenAI Forum,

When I add a conversation to the Responses API request body together with a custom function, I get this error message: No tool output found for function call call_…

If I remove the conversation, everything works.

Responses request body without a conversation:

```json
{
  "model": "gpt-5",
  "reasoning": {"effort": "low"},
  "text": {"verbosity": "low"},
  "tools": [
    {
      "type": "function",
      "name": "get_weather",
      "description": "Determine weather in my location",
      "strict": true,
      "parameters": {
        "type": "object",
        "properties": {
          "location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"},
          "unit": {"type": "string", "enum": ["c", "f"]}
        },
        "additionalProperties": false,
        "required": ["location", "unit"]
      }
    }
  ],
  "tool_choice": "auto",
  "input": [
    {
      "role": "user",
      "content": [{"type": "input_text", "text": "Temperature in celsius in Rio de Janeiro"}]
    }
  ]
}
```

Can someone help?

Thanks,

Gustavo

I had a nice conversation with the OpenAI support assistant, and it replied:

The assistant said:

Thank you for explaining clearly. Based on your description and available documentation, here’s what’s going on:

If you receive the error “No tool output found for function call” immediately on your first request—without the API returning a function call step—this usually means the combination of your request parameters is not supported as expected.

Key Details:

  • Using a conversation ID with your initial Responses API request basically “continues” an existing conversation rather than starting a new one.

  • If the conversation referenced by the conversation ID is empty, or not in the expected state for a function call, the API can fail in this way.

  • It appears the Responses API expects you to follow a multi-step workflow using conversation IDs, but won’t “kick off” the function calling flow in a single step if given both the conversation ID and a requirement for a function/tool call.

What you should try:

  1. Omit the conversation parameter in your initial request.

    • Let the model respond with a function call in this first message, then start using the returned conversation ID from this response for all later requests.
  2. If you must provide a conversation ID:

    • Only use it when continuing an already existing conversation that has had prior steps (at least one item in the conversation history).

Summary: You should not include the conversation ID on your very first request if you want function calling to be triggered. Instead, allow the API to return a function call (with tool call ID and arguments) before referencing conversation IDs for follow-up input.

This behavior aligns with common troubleshooting around OpenAI’s multi-turn APIs and function calling reported in the documentation and by users.

Let me know if you want an example API sequence for this flow!
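To make the assistant's suggested sequence concrete, here is a minimal sketch of the two request bodies as plain dictionaries. The IDs (`conv_123abc`, `call_abc123`) and the tool result are hypothetical placeholders; the exact response shape should be checked against the Responses API reference.

```python
import json

# Step 1: initial request WITHOUT a conversation ID. The model may answer
# with a function_call item instead of a final message.
first_request = {
    "model": "gpt-5",
    "tools": [{
        "type": "function",
        "name": "get_weather",
        "description": "Determine weather in my location",
        "strict": True,
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string"},
                "unit": {"type": "string", "enum": ["c", "f"]},
            },
            "additionalProperties": False,
            "required": ["location", "unit"],
        },
    }],
    "tool_choice": "auto",
    "input": [{
        "role": "user",
        "content": [{"type": "input_text",
                     "text": "Temperature in celsius in Rio de Janeiro"}],
    }],
}

# Step 2: AFTER receiving a function_call, run the tool locally and send
# its output back, now referencing the conversation from the first response.
followup_request = {
    "model": "gpt-5",
    "conversation": "conv_123abc",  # hypothetical ID from the first response
    "input": [{
        "type": "function_call_output",
        "call_id": "call_abc123",   # hypothetical ID from the function_call item
        "output": json.dumps({"temperature_c": 27}),  # made-up tool result
    }],
}

print(json.dumps(followup_request, indent=2))
```

The point of the workaround is only the ordering: the conversation ID enters the picture in step 2, never in step 1.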

You said:

What you proposed is not documented anywhere by OpenAI. The Responses API documentation clearly states that it is possible to combine conversations and custom functions. Please report the error and the misleading documentation as soon as possible.

The assistant said:

Thank you for your patience and for clearly restating the issue. You are correct that OpenAI’s documentation states it is possible to combine conversation IDs and custom function calls with the Responses API.

However, based on both your experience and available help documentation, the API should return a function_call on the first request, and you should not receive a “No tool output found for function call” error immediately. If that error comes on the first API response (before you have a chance to return tool output), this behavior does not match the intended flow or what’s documented for the supported workflows.

You have found a possible bug and/or documentation issue:

  • Your initial request, when including both a conversation ID and a required function/tool, does not generate the expected function_call response.

  • Instead, it returns an error about missing tool output, which is not expected based on the documentation.

I am reporting this bug and concern about potentially misleading documentation to the OpenAI team on your behalf.

Thank you for carefully documenting the request body, error, and discrepancy. This will help OpenAI investigate the Responses API’s current behavior and update documentation if needed.

Since OpenAI previously stated that the Assistants API functionality would be completely replaced by Responses, it would be nice if OpenAI could give an official position on the Responses API's compatibility with Conversations when custom functions are required.

Can you show the case that breaks? It's good to see the working one, but it would be better to have the broken case too, please.

Just add a valid conversation ID.
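For completeness, the failing variant is the same request body as in the original post with one extra field; the conversation ID here is a hypothetical placeholder:

```json
{
  "model": "gpt-5",
  "conversation": "conv_123abc",
  "reasoning": {"effort": "low"},
  "text": {"verbosity": "low"},
  "tools": [ "... same get_weather function tool as above ..." ],
  "tool_choice": "auto",
  "input": [
    {
      "role": "user",
      "content": [{"type": "input_text", "text": "Temperature in celsius in Rio de Janeiro"}]
    }
  ]
}
```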

Let me know if I am missing something, but I think what's going on is that in that “conv_123abc” conversation, the last thing the model did was ask for a tool to be called. Now you are trying to resume the conversation with the next user input, but you have not satisfied the model's earlier request to call a tool.

It's expecting to see a function_call_output tool response. I can see why it's frustrating that it works if you remove the tool definition, but the problem is with the state of the conversation, not with the tool itself.

If you don't want to call the tool, when you get that error you can just issue a dummy tool-call reply turn:

```json
{
  "type": "function_call_output",
  "call_id": "call_abc123",
  "status": "completed",
  "output": "Tool call skipped (no data, etc.)."
}
```

Sorry if I missed something.