Undocumented truncation of function tool call submission output

Is there any documented reason why the output of a function tool call submission is not entirely available to the assistant?

We have a function whose output is a decently-sized JSON object. When the output is submitted to a thread run, the API responds with OK. But when you ask the assistant about the object it received, it is clear that the assistant has only seen a truncated portion of it.

Is this a known issue? If so, is it documented anywhere?


The messages themselves are truncated at 32768 characters.

It seems plausible that the developers placed the same restriction on tool outputs as well, since that code path may not have a token-counting mechanism or any knowledge of which AI model is in use.
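If that guess is right, a client-side guard can at least make the truncation deliberate instead of silent. A minimal sketch, assuming a 32,768-character cap on submitted tool outputs (the limit and the constant name here are assumptions, not documented behavior):

```python
import json

# Assumed cap, mirroring the 32,768-character message limit discussed above.
MAX_TOOL_OUTPUT_CHARS = 32_768

def safe_tool_output(payload: dict) -> str:
    """Serialize a tool result and truncate it ourselves if it would
    exceed the assumed cap, so we control what gets cut off rather than
    letting the API silently drop the tail."""
    text = json.dumps(payload)
    if len(text) <= MAX_TOOL_OUTPUT_CHARS:
        return text
    return text[:MAX_TOOL_OUTPUT_CHARS]
```

A smarter version could drop low-priority JSON fields first instead of slicing the serialized string, which can leave invalid JSON behind.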

You can check whether this is the limitation with a “needle in a haystack” test: make the start of the return value irrelevant documentation, and place the information that clearly fulfills the user request after that character count. Then repeat with the answer fully contained within the character budget.
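The probe payload for that test is easy to generate. A sketch, assuming the suspected 32,768-character cutoff (the function name and the `SECRET_CODE` marker are made up for illustration):

```python
def build_probe(needle: str, needle_at: int, total_len: int = 40_000) -> str:
    """Build a tool-output string with `needle` placed exactly at
    character offset `needle_at`, padded to `total_len` characters."""
    filler = "This sentence is irrelevant padding. "
    # Repeat the filler, then cut it to exactly `needle_at` characters.
    haystack = (filler * (needle_at // len(filler) + 1))[:needle_at]
    return (haystack + needle).ljust(total_len, "x")

# Needle past the suspected limit -> expect the assistant to miss it.
beyond = build_probe("SECRET_CODE=7431", needle_at=33_000)
# Needle safely inside the limit -> expect the assistant to find it.
within = build_probe("SECRET_CODE=7431", needle_at=10_000)
```

Submit each variant as the tool output, then ask the assistant for the secret code; if it only answers in the second case, the cutoff is confirmed.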

You would need the 128k context length of the gpt-4-turbo-preview model in order to accept such a length.


I’m seeing truncation of code_interpreter outputs at 20,000 characters, but I’d love to know if anyone has seen it documented. I also can’t find any documentation saying messages are limited to 32,768 characters; I only see that system instructions are limited to 32,768 characters in the API reference.

I see in another thread that it may have been a temporary issue:

https://community.openai.com/t/fixed-assistants-api-seems-to-have-new-limit-of-20000-character-tool-output/625976