Is it possible (and how) to access previous chat completions using the chat response Id

Haven’t been able to find anything in the forums along these lines.

I’d like to be able to continue a conversation with GPT-3.5 from a ->chat() call without resending the past conversation to the API, which gobbles up tokens. I was hoping to use the API's chat response IDs instead.

Otherwise, to be honest, I’m not sure why it’s necessary to return such a detailed unique response ID. I could just as easily create a unique ID at my end or use a DB row ID. The mere fact that an OpenAI unique ID is generated for each ->chat() response would suggest that responses can be cross-referenced back at OpenAI headquarters - perhaps for checking responses that go awry?

Bing Chat search also suggested that completions can be re-accessed using the ID, like this: /v1/completions/[chatcmpl-000000000ID]. I'm not sure how to apply this in an API context, or whether it applies to /v1/chat/completions.

Is there a cheat code that’s not in the documentation? Shouldn’t it be possible to access previous responses by passing the chat response ID in a subsequent API call?

If not, could anyone help me understand the utility of the generated chat response ID? Does it correspond to logs kept by OpenAI?


GPT-based language models are “stateless”: there’s no memory of the past.

If you want the model to know something in its next response, you need to send the tokens, otherwise it won’t be able to access it.

There’s no “free lunch” to be had.
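To make that concrete, here’s a minimal Python sketch (assuming the official openai Python package and the gpt-3.5-turbo model - the ->chat() wrapper you’re using would do the equivalent internally). The conversation history lives on your side and is resent with every request; the returned id is just an identifier, not a handle you can pass back to retrieve or continue the earlier completion.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The API is stateless, so the full conversation history is kept client-side
# and resent with every request.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

first = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(first.id)  # e.g. "chatcmpl-..." - an identifier only, not a retrieval handle

# To continue the conversation, append the assistant's reply and the next
# user turn, then send the whole list again (this is where the tokens go).
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "And roughly what is its population?"})

second = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(second.choices[0].message.content)
```

The usual way to keep token usage in check is to trim or summarise the older turns in that list before resending, not to reference them by ID.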

See my reply in your other thread; please avoid cross-posting in the future.


Thanks Nova, much appreciated.

I also found this answer, “What is the completion id, what can it be used for?”, which aligns with your response.

Cheers

For sure - stateless as far as the GPT model is concerned… but for misuse monitoring and other purposes, I was hoping there might be a separate readable log :wink: