Is it possible (and how) to access previous chat completions using the chat response ID?

Haven’t been able to find anything in the forums along these lines.

I’d like to be able to continue a conversation with gpt-3.5 from a ->chat() call without resending the entire past conversation back to the API, which gobbles up tokens. I was hoping to use the API chat response IDs instead.
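For context, here is a minimal sketch of the pattern I mean, written with the Python openai client rather than the PHP-style ->chat() wrapper I'm actually using; the model name and prompts are placeholders:

```python
# Minimal sketch (Python openai>=1.0 client): every request must carry the
# whole conversation, because the Chat Completions API is stateless.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarise the following article: <article text>"},
]

first = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(first.id)  # e.g. "chatcmpl-..." - an identifier, not a handle you can resume from

# To "continue" the conversation you must append the reply and resend everything:
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "What was the article's main argument?"})

second = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(second.usage.total_tokens)  # grows with every turn - the token cost I'm asking about
```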

Otherwise, to be honest, I’m not sure why it’s necessary to return such a detailed unique response ID. I could just as easily create a unique ID at my end or use a DB row ID. The mere fact that a unique OpenAI ID is generated for each ->chat() response suggests that responses could be cross-referenced back at OpenAI headquarters - perhaps for checking responses that go awry?

Bing Chat search also suggested that completions can be re-accessed using the ID like this: /v1/completions/[chatcmpl-000000000ID]. I'm not sure how to apply this in an API context, or whether it applies to /v1/chat/completions.

Is there a cheat code that’s not in the documentation? Shouldn’t it be possible to access previous responses using the chat response ID in a subsequent API call?

If not, could anyone help me understand the utility of the generated chat response ID? Does it correspond to logs kept by OpenAI?


See my reply in your other thread; please avoid multiple cross-posts in the future.


Thanks Nova, much appreciated.

I also found this answer, “What is the completion id, what can it be used for?”, which aligns with your response.

Cheers

For sure - stateless as far as the GPT model is concerned… but for misuse detection and other purposes, I was hoping there might be a separate readable log :wink:

Folks - I am chiming in a bit late, but there is a lot of confusing info in the blogs, the API docs, and these forums. OpenAI has not done the best job of describing how the id is (or isn’t) used.
I performed a simple test on ChatGPT:

  1. Copy and paste a 1200-word article.
  2. Have ChatGPT summarise that article.
  3. Ask 5-6 questions about the article and get responses from ChatGPT. Try to cross the 4096-token limit.
  4. Copy and paste a second 1200-word article.
  5. ChatGPT summarises it.
  6. Ask a few more questions about the second article. Try to cross the 4096-token limit.
  7. Now ask something about the first article you posted - and ChatGPT will tell you that it needs more context.

My guess from all this is that even ChatGPT only uses the most recent tokens of the conversation and loses older context over time (the 4096-token limit).

If you need to keep discussing a long-form article (say 5000 words) at length, then you’ll have to keep providing that same context to ChatGPT over and over again. From an API perspective this means that your token count will go through the roof!
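As a rough illustration of that trade-off, here is one way to keep resending context while staying under the window, again in Python with the openai client plus tiktoken for counting. The token budget, helper names, and the idea of pinning an article summary in the system message are my own assumptions for the sketch, not anything the API provides:

```python
# Rough sketch: keep the system prompt (with the article or its summary) pinned,
# and drop the oldest question/answer turns once a token budget is exceeded,
# so each request still fits the 4096-token window.
import tiktoken
from openai import OpenAI

client = OpenAI()
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

def count_tokens(messages):
    # Rough count: message content only, ignoring per-message overhead.
    return sum(len(enc.encode(m["content"])) for m in messages)

def trim_history(messages, budget=3000):
    # Keep messages[0] (the pinned system prompt) and drop the oldest later
    # turns until the conversation fits the budget.
    trimmed = list(messages)
    while count_tokens(trimmed) > budget and len(trimmed) > 2:
        del trimmed[1]
    return trimmed

messages = [
    {"role": "system", "content": "You answer questions about this article: <5000-word article or a summary of it>"},
]

def ask(question):
    messages.append({"role": "user", "content": question})
    payload = trim_history(messages)
    reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=payload)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer
```

You still pay to resend the pinned context on every call; trimming only caps how fast the per-request token count grows.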

Hope this helps, and I’d love to know if you’ve reached the same conclusion.

Thanks!
