Context tokens cost too much

Is it correct that "context tokens" means output in the usage manual?
In a recent session, I noticed that context tokens cost more than 100 times as much as generated tokens.

Am I doing something wrong with my usage?
Any suggestions on how to reduce context tokens?


Context tokens count as input tokens, and input tokens are generally less expensive than output tokens. You can find the exact per-model pricing here (see also the screenshot provided).

It's difficult to suggest ways to reduce your context-token volume without knowing more about your use case.
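For illustration, here's a rough sketch of how input (context) and output tokens are billed separately. The per-million-token prices below are placeholders, not real pricing; check the pricing page for the model you use:

```python
# Hypothetical per-million-token prices -- NOT real pricing, just for illustration.
INPUT_PRICE_PER_M = 0.50   # context (input) tokens
OUTPUT_PRICE_PER_M = 1.50  # generated (output) tokens

def request_cost(context_tokens: int, generated_tokens: int) -> float:
    """Total cost in dollars for one request, billing input and output separately."""
    return (context_tokens * INPUT_PRICE_PER_M
            + generated_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# A large context with a short answer: the input side dominates the bill
# even though each input token is cheaper than each output token.
print(request_cost(100_000, 500))
```

The point: even at a lower per-token price, a 100k-token context with a 500-token answer makes the context side most of the bill.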


Text generation - OpenAI API


I saw the same context-token peak today.


I'm currently testing function calling in the Assistants API with the GitLab and Slack APIs; I have 7 functions at the moment.

Are the function definitions counted as context tokens in each chat interaction in a Thread?

I assume function results are also counted as context tokens?

GitLab API JSON results can be quite verbose. I ran a debug session with many function calls to test the feature, which could explain this peak, but it's still huge!
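One way to cut the context-token volume from verbose API responses is to strip the JSON down to the fields the model actually needs before returning it as the function result. A minimal sketch, with illustrative field names loosely based on a GitLab issues response:

```python
import json

# Fields the assistant actually needs -- everything else is dropped.
# This set is illustrative; pick whatever your functions really use.
KEEP_FIELDS = {"iid", "title", "state", "web_url"}

def trim_result(raw_json: str, keep=KEEP_FIELDS) -> str:
    """Reduce a verbose JSON list of objects to only the kept fields,
    serialized compactly (no extra whitespace) to save tokens."""
    items = json.loads(raw_json)
    trimmed = [{k: v for k, v in item.items() if k in keep} for item in items]
    return json.dumps(trimmed, separators=(",", ":"))

# Example: a verbose issue record shrinks to a few short fields.
raw = json.dumps([{
    "iid": 42, "title": "Fix login bug", "state": "opened",
    "web_url": "https://gitlab.example.com/issues/42",
    "description": "a long description...", "author": {"name": "a", "id": 1},
    "labels": [], "milestone": None, "time_stats": {},
}])
print(len(trim_result(raw)) < len(raw))  # the trimmed payload is much smaller
```

Trimming (or summarizing) each function result before it goes back into the Thread keeps it from being re-counted as context on every subsequent interaction.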