Calculation of input tokens for the Assistants API

How are input tokens calculated for the Assistants API?

With the chat completions endpoint (https://api.openai.com/v1/chat/completions), I can reduce input-token cost by adjusting what I send as input.

That is how I currently use it.
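For reference, the kind of manual trimming described above can be sketched like this (a minimal illustration; `rough_token_count` is a crude heuristic of my own, not an official tokenizer, and the budget value is arbitrary):

```python
# Sketch: trim oldest messages so the payload sent to
# /v1/chat/completions stays under a chosen input-token budget.
# rough_token_count is a crude stand-in for a real tokenizer
# such as tiktoken; treat the numbers as estimates only.

def rough_token_count(text: str) -> int:
    # Very rough heuristic: ~1 token per 4 characters of English text.
    return max(1, len(text) // 4)

def trim_history(messages: list[dict], budget: int) -> list[dict]:
    """Keep the system message (if any) plus the most recent
    messages whose estimated tokens fit within `budget`."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    used = sum(rough_token_count(m["content"]) for m in system)
    kept: list[dict] = []
    for m in reversed(rest):  # walk newest-first
        cost = rough_token_count(m["content"])
        if used + cost > budget:
            break
        kept.append(m)
        used += cost
    return system + list(reversed(kept))

history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "First question, now stale. " * 50},
    {"role": "assistant", "content": "An old answer. " * 50},
    {"role": "user", "content": "The question I care about now."},
]
trimmed = trim_history(history, budget=100)
# Only the system message and the latest user message survive
# the 100-token budget; the long older turns are dropped.
```

Because chat completions is stateless, whatever ends up in `trimmed` is exactly what gets billed as input, which is the control I am asking about below.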

Does the Assistants API work the same way?

I haven’t tried the Assistants API yet, but I don’t see a way to control the input.

Can the Assistants API also control input tokens the way completions can? I don’t understand how its input tokens work.

//////////////////////////////////////
Request body

role string Required
The role of the entity that is creating the message. Currently only user is supported.

content string Required
The content of the message.
//////////////////////////////////////

Are input tokens simply determined from the ‘role’: ‘user’ messages?

How are message lists from previous conversations reflected?

  1. Does the output automatically remember previous conversations?

  2. Are messages from previous conversations included in the input token count?

Thank you.

The assistants endpoint does not have context token count adjustment at this time; it’s something that is planned for the next update, though there is no timeline on when that might be. For now the context may grow up to the maximum of 128k tokens if the retrieval system determines that is required.


Oh, I see.
I hope that happens soon.
Thank you 🙂
