Advice on Assistants tool_outputs character limit

Hi all! New here, so apologies if I don’t ask this correctly.

We’ve discovered the character limit on tool_outputs in the Assistants API. It would be ace to add this to the docs to save others many confused evenings! 🙂

Quick ask for the OpenAI team - is it worth jumping through hoops to create an iterative process to sidestep this, or would it be smarter to do our best with the limit in place, with the expectation that it will increase in the not-too-distant future?

The use case is straightforward: we return JSON in tool_outputs, and the 32,000-character limit (characters, not tokens) is obviously small compared to the model’s context window.

Thanks all!

John

Curious about your use case. You might consider adding the larger data chunks as files in the thread instead of returning them from the function. (You can attach them to the thread from within the function call.)
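For what it’s worth, here is a minimal sketch of what that could look like with the Python SDK against the current v1 beta. The function name and arguments are hypothetical, and it assumes (as suggested above) that the thread will accept a new message at that point in the run:

```python
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical sketch: upload the large JSON as a file, attach it to
# the thread, and keep the actual tool output small. handle_tool_call
# and its arguments are invented for illustration; file_ids on messages
# is the v1-beta attachment mechanism.
def handle_tool_call(thread_id: str, big_json_path: str) -> str:
    uploaded = client.files.create(
        file=open(big_json_path, "rb"),
        purpose="assistants",
    )
    client.beta.threads.messages.create(
        thread_id=thread_id,
        role="user",
        content="Full results are attached as a file.",
        file_ids=[uploaded.id],
    )
    # Return only a short pointer, comfortably under 32,000 characters.
    return json.dumps({"status": "ok", "results_file_id": uploaded.id})
```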

No matter what future changes bring (‘640k should be enough’), I believe this will continue to be a concern, even as we move on from 32k.
I have been playing with Gemini 1.5, which boasts a 1-million-token context, and it is still inherently very lazy about going through my 64k document, even while the bottom of the screen boasts that I’m only using 18k of 1 million tokens.

Happy to think along if you share a bit more!


I think that @jlvanhulst’s idea of adding larger data chunks as files is a good one. Two additional options are to add pagination or summarization functionality to your tool.

First, and I’m not sure what kind of data you are returning with your function, but if it can be chunked in a useful way, you could add a page parameter to your function and explain to the Assistant that it should iterate through the pages to get the full results.
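As an illustration, here is one shape that pagination could take. This is a hypothetical sketch, where `run_query` stands in for whatever data source your tool already uses:

```python
import json

# Hypothetical sketch of the pagination idea. PAGE_CHAR_BUDGET leaves
# headroom below the 32,000-character tool_outputs cap.
PAGE_CHAR_BUDGET = 30_000

def get_results(query: str, page: int = 1) -> str:
    """Return one page of results, with metadata the Assistant can
    use to decide whether to request the next page."""
    items = run_query(query)  # assumed existing data source
    pages, current, size = [], [], 0
    for item in items:
        chunk = json.dumps(item)
        if current and size + len(chunk) > PAGE_CHAR_BUDGET:
            pages.append(current)
            current, size = [], 0
        current.append(item)
        size += len(chunk)
    if current:
        pages.append(current)
    if not pages:
        return json.dumps({"page": 1, "total_pages": 0, "results": []})
    page = max(1, min(page, len(pages)))
    return json.dumps({
        "page": page,
        "total_pages": len(pages),
        "results": pages[page - 1],
    })
```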

Second, you could take the full output, send it through the LLM, and ask it to summarize to no more than 32,000 characters; if you know at the time, give it hints about what types of data to emphasize and de-emphasize in the summary.
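A sketch of that second option might look like this (the model name and the 30,000-character target are assumptions on my part, not anything official):

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical sketch: compress an oversized tool result with a second
# model call before submitting it as tool_outputs.
def fit_tool_output(raw_json: str, hint: str = "") -> str:
    if len(raw_json) <= 32_000:
        return raw_json  # already within the limit, pass through as-is
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize the following JSON to well under "
                    "30,000 characters, preserving its structure. " + hint
                ),
            },
            {"role": "user", "content": raw_json},
        ],
    )
    return response.choices[0].message.content
```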


Thanks both! Both sound like great suggestions. Given that the Assistants API is in beta, and my spidey-senses tell me a release is coming imminently (fingers crossed!), I’m trying to avoid coding around this if we’re only a couple of weeks away from the limit being perhaps twice as large, if you know what I mean?

We can all dream! Why do you think it would end up being more?

I guess because I can’t see any rational reason for the current limit; I suspect it’s just a legacy from when context sizes were much smaller.