1. Thanks to OpenAI for releasing this great product at such an affordable price.
2. Many users have questions about conversation sessions, session IDs, chat history, and so on.
3. I am learning how to tune ChatGPT for specialized purposes, such as AMA sessions or customer service.
4. Currently the gpt-3.5-turbo model (ChatGPT API) does not allow fine-tuning.

Here is my interesting finding:


You can see that for the 2nd, 3rd, and 4th API calls, the total token count is NOT equal to prompt tokens + completion tokens.
(That is because for every user-entered prompt, I have "inserted" my own context for this conversation, about 2,874 characters.)

But it seems the repeated context I inserted does not factor into the final total token count.
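
For reference, here is a minimal sketch of how the reported usage fields can be checked after each call. It assumes the pre-1.0 `openai` Python library; the `CONTEXT` string and the `ask` helper are placeholders I made up, not the exact code I used:

```python
import openai

openai.api_key = "sk-..."  # set your own API key

# Placeholder for the ~2,874-character context I prepend to every prompt
CONTEXT = "You are a customer-service assistant for ... (long context here)"

def ask(user_prompt):
    # One chat completion with the context inserted as a system message
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": CONTEXT},
            {"role": "user", "content": user_prompt},
        ],
    )
    # The API reports three counters for every call
    usage = response["usage"]
    print("prompt_tokens:    ", usage["prompt_tokens"])
    print("completion_tokens:", usage["completion_tokens"])
    print("total_tokens:     ", usage["total_tokens"])
    return response["choices"][0]["message"]["content"]
```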

I will test what happens if I input a large amount of context and carry that context forward in every user prompt (in the same browser session), to see how it affects the token calculation.
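
Here is a rough sketch of how that test could look, counting the carried-forward prompt locally with `tiktoken` so the estimate can be compared against the `prompt_tokens` value reported by the API (the `CONTEXT` string, `history` list, and `build_prompt` helper are placeholders of my own, not a confirmed recipe):

```python
import tiktoken

# Local tokenizer for gpt-3.5-turbo (tiktoken is OpenAI's tokenizer library)
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

# Placeholder for the large context carried forward with every prompt
CONTEXT = "...large block of context repeated with every user prompt..."

history = []  # list of (user_prompt, assistant_reply) pairs from earlier turns

def build_prompt(user_prompt):
    # Prepend the context, then the whole conversation so far, then the new question
    parts = [CONTEXT]
    for question, answer in history:
        parts.append("User: " + question + "\nAssistant: " + answer)
    parts.append("User: " + user_prompt)
    return "\n\n".join(parts)

prompt = build_prompt("How do I reset my password?")
print("local token estimate:", len(enc.encode(prompt)))
# Compare this estimate with the prompt_tokens value reported by the API
```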

After further study, comparing the completion token count with the actual response from ChatGPT, it looks like the completion count already includes the "context".

I say that because 700+ completion tokens seems too high for the actual response by itself.
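
One way to double-check this is to tokenize the visible reply with `tiktoken` and compare it against the reported completion count (just a sketch; `response_text` stands for the reply copied from the API result):

```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

# Paste the visible reply text here (placeholder)
response_text = "...the assistant reply copied from the API result..."

# If this count is far below the reported 700+ completion_tokens,
# the completion counter must be covering more than the visible reply
print("tokens in the visible reply:", len(enc.encode(response_text)))
```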

Sorry for my mistake in the first post.