Token overflow with code interpreter

Hello, I’ve been using the Code Interpreter with the Assistants API and gpt-4o, and I’ve run into an issue. Once the input reaches a certain number of tokens, the reported token usage jumps drastically, counting tokens that shouldn’t exist.

The workflow is as follows:

1.- I use an assistant, configured to create graphs, whose instructions describe a CSV file with fake data. I then include that data in the prompt so I can run queries against it.

2.- I send a certain amount of data: 85 rows, totaling 19239 tokens including the output and instructions. In this case I ask something like ‘make me a chart with this data’ about a given topic.
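Before sending, the input size can be sanity-checked with a rough estimate. This is just a heuristic sketch (~4 characters per token for English text, not the real gpt-4o tokenizer; for exact counts a proper tokenizer library would be needed):

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token.

    This is a heuristic, NOT the gpt-4o tokenizer; real counts differ.
    """
    return max(1, len(text) // 4)


def estimate_rows(rows: list[str]) -> int:
    """Sum the rough per-row estimates over all CSV rows."""
    return sum(estimate_tokens(row) for row in rows)


# Fake data for illustration: 85 rows, as in the run described above.
rows = ["2024-01-01,store_a,42.5"] * 85
print(len(rows), estimate_rows(rows))
```

An estimate like this only tells you the raw input size; it cannot account for whatever the service adds on top (instructions, tool output, intermediate messages).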

This is a simple print statement confirming the number of logs (85).

The completion, prompt, and total token counts are 4108, 15131, and 19239 respectively, which is below the context length.
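Those numbers are internally consistent and comfortably inside the window. A small helper to check a usage report against the limit (the 128000 figure is gpt-4o’s documented context length; the parameter names mirror the API’s usage fields):

```python
GPT4O_CONTEXT_LENGTH = 128_000  # gpt-4o context window (prompt + completion)


def check_usage(completion_tokens: int, prompt_tokens: int) -> dict:
    """Recompute the total and flag whether it fits the context window."""
    total = completion_tokens + prompt_tokens
    return {
        "total_tokens": total,
        "within_context": total <= GPT4O_CONTEXT_LENGTH,
    }


# Numbers from the working run above:
print(check_usage(completion_tokens=4108, prompt_tokens=15131))
# → {'total_tokens': 19239, 'within_context': True}
```

Running the same check on the failing run’s numbers shows a total well past the window, which matches the context-length error.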

The assistant works well and generates the graph, but if I add just one more piece of data, I get this:

This is just a simple log.

As you can see, the token usage jumps drastically: 28494, 127829, and 156778 tokens. Why? This is far higher than anything we could expect, and I get an error because it exceeds the context length.
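One way to investigate a jump like this is to list the run’s steps and sum per-step usage, since each tool call in a Code Interpreter run can re-send the growing conversation and inflate prompt tokens. The aggregation logic is sketched below over fake step data; the `steps` structure here is an assumption meant to mimic what a run-steps listing returns, not the exact API shape:

```python
def total_step_usage(steps: list[dict]) -> dict:
    """Aggregate prompt/completion token usage across run steps.

    If each step re-sends the conversation so far, per-step prompt
    tokens compound quickly, which is one plausible source of a large
    jump in the run's reported total.
    """
    totals = {"prompt_tokens": 0, "completion_tokens": 0}
    for step in steps:
        usage = step.get("usage") or {}
        totals["prompt_tokens"] += usage.get("prompt_tokens", 0)
        totals["completion_tokens"] += usage.get("completion_tokens", 0)
    totals["total_tokens"] = totals["prompt_tokens"] + totals["completion_tokens"]
    return totals


# Fake steps for illustration only: the prompt grows on every round trip.
fake_steps = [
    {"usage": {"prompt_tokens": 19000, "completion_tokens": 1200}},
    {"usage": {"prompt_tokens": 21000, "completion_tokens": 1500}},
    {"usage": {"prompt_tokens": 24000, "completion_tokens": 2000}},
]
print(total_step_usage(fake_steps))
# → {'prompt_tokens': 64000, 'completion_tokens': 4700, 'total_tokens': 68700}
```

Inspecting the real run steps this way would show whether the extra tokens come from repeated prompts or from something else entirely.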

If I’m doing something wrong, or there’s something going on that I’m not aware of, please let me know.

Thank you.