Do threads get more expensive over time?

If I keep the same thread id, the context grows over time, right? So, does each subsequent prompt to that thread use an ever-increasing number of tokens? My prompts seem oddly expensive: I have a code-interpreter assistant that I prompt with ~1300 tokens. Then it generates a file of around 500 tokens. The total should be around 1-2 cents, but I see my usage jump by 30-40 cents. I am deliberately reusing the same thread so that I only pay the $0.03 code interpreter session fee once (per hour), but maybe I should reset so I'm not dragging along all these historical tokens…?

For context, I don't really need the conversation history: I'm asking for a similar task each time, just parameterized differently. I'm not trying to build out a conversation.



Not only are you paying for the entire history each time, you’re also paying for any inserted contexts with retrieval, etc.
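To see why this adds up, here's a rough sketch of how input tokens grow when the whole thread is re-sent each turn. The per-1K prices are hypothetical placeholders for illustration, not actual model pricing:

```python
# Illustrative only: shows how re-sending a growing thread inflates input tokens.
# The per-1K prices below are hypothetical placeholders, not real model rates.
PROMPT_TOKENS = 1300      # new user prompt each turn
OUTPUT_TOKENS = 500       # assistant output each turn
PRICE_IN_PER_1K = 0.01    # hypothetical input price, $ per 1K tokens
PRICE_OUT_PER_1K = 0.03   # hypothetical output price, $ per 1K tokens

history = 0  # tokens of prior turns carried along in the thread
for turn in range(1, 6):
    input_tokens = history + PROMPT_TOKENS
    cost = (input_tokens * PRICE_IN_PER_1K + OUTPUT_TOKENS * PRICE_OUT_PER_1K) / 1000
    print(f"turn {turn}: input={input_tokens} tokens, cost≈${cost:.3f}")
    history += PROMPT_TOKENS + OUTPUT_TOKENS  # the whole turn joins the history
```

Even at these made-up rates, turn 5 sends 8500 input tokens where turn 1 sent 1300, and any retrieval or instruction context inserted by the assistant is re-billed on top of that.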

Have you considered using chat completions instead?


I need to download JSON data, and it looks like only assistants can create files, right? What I want is for a one-off chat to create a file.

Consider: anything that assistants does when it runs threads against the same AI models available on chat completions, you can also do yourself. It just takes programming on your part.

For example, the "code interpreter" (a Python notebook with persistent storage) is driven entirely by tool calls: the AI writes code, that code runs in the Python environment, and a final printout of values is returned to the AI. The code it writes interacts with or produces files in the mount point of the Python environment (plus an "annotation" feature the AI can use in its responses).
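The tool-call side of that can be sketched roughly like this. The `run_python` tool name, its schema, and the executor below are all illustrative assumptions, and this executor is not actually sandboxed, so don't run untrusted code this way in production:

```python
# Hypothetical sketch: a "run_python" tool you could expose via chat completions
# tool calling to replicate code interpreter. Names and schema are illustrative.
# WARNING: this executes code directly with no sandboxing; illustration only.
import json
import pathlib
import subprocess
import sys

RUN_PYTHON_TOOL = {
    "type": "function",
    "function": {
        "name": "run_python",
        "description": "Execute Python code in a work directory that persists for the session.",
        "parameters": {
            "type": "object",
            "properties": {"code": {"type": "string"}},
            "required": ["code"],
        },
    },
}

def run_python(code: str, workdir: pathlib.Path) -> str:
    """Run the model's code in workdir and return stdout/stderr plus a file listing."""
    script = workdir / "snippet.py"
    script.write_text(code)
    proc = subprocess.run(
        [sys.executable, str(script)],
        cwd=workdir, capture_output=True, text=True, timeout=30,
    )
    return json.dumps({
        "stdout": proc.stdout,
        "stderr": proc.stderr,
        "files": [p.name for p in workdir.iterdir()],
    })
```

On each tool call from the model, you would run `run_python(arguments["code"], session_dir)` and send the JSON string back as the tool result message, so the AI sees its own printout, just as it does with code interpreter.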

If the "file" is simple text, you can have the AI produce language directly in a form that fits your purpose. However, it is the Python code that can produce "an image with a bar graph" or "a re-sorted CSV with calculations", results that are harder for the AI to pass back as a tool return and answer to the user. So you would build a similar mechanism: the AI's Python execution returns a success statement, the files it created are recorded, and those files can then be presented for download in your user interface from the stateful data store where the Python session ran.
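The file-handoff step might look like the sketch below: diff the work directory before and after execution to find what the AI's code created, then describe each new file for your UI. The `/files/` URL route is a made-up placeholder:

```python
# Hedged sketch: after the model's code runs, detect files it created so your
# UI can surface them as downloads. The "/files/" URL scheme is hypothetical.
import pathlib

def collect_new_files(workdir: pathlib.Path, before: set[str]) -> list[dict]:
    """Compare directory contents before and after execution; describe new files."""
    new = sorted(set(p.name for p in workdir.iterdir()) - before)
    return [{"name": n, "download_url": f"/files/{n}"} for n in new]
```

You'd capture `before = {p.name for p in workdir.iterdir()}` just before invoking the model's code, call `collect_new_files` afterward, and render the results as download links backed by your session's storage.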

That is certainly a large bit of coding and environment setup that "assistants" has already done for you. But while assistants is general-purpose, you can go beyond its capabilities with your own specialization and coding imagination.