Does the Assistants API Persist Context?

I’ve recently been learning about the Assistants API, but it seems a lot like calling, say, the GPT-4 Turbo (or any other model) API directly. I want to understand the API better since I’m developing a software solution with it.

My question is: do the instructions (context params) given to the Assistants API take up token length on each request, or are they persisted throughout the context window at no cost?

Does that make sense?

Yes, you pay for all tokens in and out. There’s no “state” that’s maintained at this time.
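To make that concrete, here’s a minimal sketch (assuming the v1.x `openai` Python SDK with the beta Assistants endpoints; the assistant name, instructions, model, and message content are placeholders). The instructions are stored on the assistant object, but every run still sends them to the model, so they show up as prompt tokens in the run’s usage:

```python
from openai import OpenAI

client = OpenAI()

# Instructions are saved with the assistant, but they are re-sent to the
# model (and billed as prompt tokens) on every run.
assistant = client.beta.assistants.create(
    name="Support Bot",                                    # placeholder
    instructions="You are a helpful support agent.",       # placeholder
    model="gpt-4-turbo",                                    # placeholder
)

# Create a thread and add a user message to it.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="How do I reset my password?",                  # placeholder
)

# Run the assistant on the thread (create_and_poll needs a recent SDK version;
# otherwise create the run and poll runs.retrieve yourself).
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=assistant.id,
)

# After the run completes, its usage shows that the instructions plus the
# accumulated thread messages were all counted as prompt tokens.
print(run.usage)
```

Each new run re-submits the instructions along with the (growing) thread history, so token usage climbs as the conversation gets longer.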


OK, gotcha. So then what is the difference between the Assistants API and creating your own ChatGPT?