Haven’t been able to find anything in the forums along these lines.
I’d like to be able to continue a conversation with GPT-3.5 from a ->chat() call without resending the past conversation back to the API, which gobbles up tokens. I was hoping to use the API chat response IDs instead.
Otherwise, to be honest, I’m not sure why it’s necessary to return such a detailed unique response ID. I could just as easily create a unique ID at my end or use a DB row ID. The mere fact that an OpenAI unique ID is generated for each ->chat() response suggests that responses could be cross-referenced back at OpenAI headquarters - perhaps for checking responses that go awry?
Bing Chat search also suggested that completions can be re-accessed using the ID, like this: /v1/completions/[chatcmpl-000000000ID]. Not sure how to apply this in an API context, or whether it applies to /v1/chat/completions.
Is there a cheat code that’s not in the documentation? Shouldn’t it be possible to access previous responses by passing the chat response ID in a subsequent API call?
If not, could anyone help me understand the utility of the generated chat response ID? Does it correspond to logs kept by OpenAI?
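As far as I can tell, /v1/chat/completions is stateless: the id in each response is just a label for that one completion, not a handle you can send back to resume the conversation. Here’s a minimal Python sketch of what I mean by “resending” - no real API call is made, and build_payload is a made-up helper that just shows what each request body would have to contain:

```python
# Sketch: the chat completions endpoint is stateless, so the only way I know
# of to "continue" a conversation is to resend the accumulated messages.
# No network calls here -- build_payload just shows what each request carries.

def build_payload(history, new_user_message, model="gpt-3.5-turbo"):
    """Return the JSON body a /v1/chat/completions request would need."""
    messages = history + [{"role": "user", "content": new_user_message}]
    return {"model": model, "messages": messages}

history = [{"role": "system", "content": "You are a helpful assistant."}]

# Turn 1: one system + one user message goes over the wire.
payload1 = build_payload(history, "Summarise this 1200 word article: ...")

# Pretend we got a reply back; its id (e.g. "chatcmpl-...") is just a
# label for logs -- it cannot be sent back to retrieve the conversation.
history = payload1["messages"] + [{"role": "assistant", "content": "Summary..."}]

# Turn 2: the whole history must be resent, so the payload keeps growing.
payload2 = build_payload(history, "What was the article's main point?")

print(len(payload1["messages"]))  # 2
print(len(payload2["messages"]))  # 4
```

Every turn carries the full history, which is exactly the token cost I was hoping the response ID would let me avoid.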
Folks - I’m chiming in a bit late, but there is a lot of confusing info in the blogs, the API docs, and these forums. OpenAI has not done the best job of describing how the id is (or isn’t) used.
I performed a simple test on ChatGPT:
copy and paste a 1200 word article
ask ChatGPT to summarise that article
ask 5-6 questions about the article and get responses from ChatGPT. Try to cross the 4096-token limit.
copy and paste a 1200 word article
ChatGPT summarises it.
ask a few more questions about the second article. Try to cross the 4096-token limit.
now ask something about the first article you posted - ChatGPT will tell you that it needs more context.
My guess from all this is that even ChatGPT only keeps the most recent tokens and loses older context over time (the 4096-token limit).
If you need to continue talking about a long-form article (say 5000 words) at length, then you’ll have to keep providing that same context to ChatGPT over and over again. From an API perspective, this means your token count will go through the roof!
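If it helps, here’s a rough sketch of the trimming you end up doing on the API side to keep the bill down: drop the oldest turns once the history exceeds a budget, keeping the system prompt. Note that count_tokens is a crude stand-in (a real version would use something like tiktoken), and the 4-characters-per-token ratio is just an assumption:

```python
# Sketch: keep the token bill down by dropping the oldest turns once the
# history exceeds a budget. count_tokens is a crude stand-in -- a real
# version would use a proper tokenizer such as tiktoken.

def count_tokens(message):
    # Crude assumption: roughly 1 token per 4 characters of content.
    return max(1, len(message["content"]) // 4)

def trim_history(history, budget=4096):
    """Keep the system prompt plus the most recent messages under budget."""
    system = [m for m in history if m["role"] == "system"]
    rest = [m for m in history if m["role"] != "system"]
    kept, total = [], sum(count_tokens(m) for m in system)
    for message in reversed(rest):          # walk newest-first
        total += count_tokens(message)
        if total > budget:
            break                           # oldest turns fall off here
        kept.append(message)
    return system + list(reversed(kept))

history = [{"role": "system", "content": "You are a helpful assistant."},
           {"role": "user", "content": "x" * 4000},       # ~1000 tokens
           {"role": "assistant", "content": "y" * 4000},  # ~1000 tokens
           {"role": "user", "content": "z" * 4000},       # ~1000 tokens
           {"role": "assistant", "content": "w" * 4000},  # ~1000 tokens
           {"role": "user", "content": "latest question"}]

trimmed = trim_history(history, budget=3000)
print(len(trimmed))  # 4 -- the oldest long turn was dropped
```

The trade-off is the one you saw in the experiment: whatever falls outside the window is simply gone, so questions about the first article get the “needs more context” response.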
Hope this helps, and I’d love to know if you’ve reached the same conclusion.