Haven’t been able to find anything in the forums along these lines.
I’d like to be able to continue a conversation with GPT-3.5 from a ->chat() call without resending the past conversation back to the API, which gobbles up tokens. I was hoping to use the API chat response IDs instead.
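For context, this is roughly what I’m doing now (a minimal sketch, assuming the openai-php/client package), where the entire message history has to go back with every call:

```php
<?php
// Sketch of my current approach with openai-php/client:
// every turn, the full history is resent, which is what burns the tokens.
require 'vendor/autoload.php';

$client = OpenAI::client(getenv('OPENAI_API_KEY'));

$messages = [
    ['role' => 'user', 'content' => 'Hello, who won the 2018 World Cup?'],
];

$first = $client->chat()->create([
    'model'    => 'gpt-3.5-turbo',
    'messages' => $messages,
]);

// e.g. "chatcmpl-..." - the ID I was hoping to reuse later
echo $first->id . PHP_EOL;

// To continue the conversation I have to append the assistant reply
// plus my follow-up and send the whole lot again:
$messages[] = ['role' => 'assistant', 'content' => $first->choices[0]->message->content];
$messages[] = ['role' => 'user', 'content' => 'And who was the top scorer?'];

$second = $client->chat()->create([
    'model'    => 'gpt-3.5-turbo',
    'messages' => $messages,
]);

echo $second->choices[0]->message->content . PHP_EOL;
```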
Otherwise, to be honest, I’m not sure why it’s necessary to return such a detailed unique response ID. I could just as easily create a unique ID at my end or use a DB row ID. The mere fact that an OpenAI unique ID is generated for each ->chat() response would suggest that responses can be cross-referenced back at OpenAI headquarters - perhaps for checking responses that go awry?
Bing Chat also suggested that completions can be re-accessed using the ID, like this: /v1/completions/[chatcmpl-000000000ID]. I’m not sure how to apply this in an API context, or whether it applies to /v1/chat/completions.
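This is the kind of call I imagined Bing was describing (purely hypothetical - I can’t find this endpoint documented anywhere, so it may not exist at all):

```php
<?php
// Hypothetical sketch of the retrieval call Bing Chat seemed to suggest.
// I have NOT found this documented, so the endpoint may simply not exist.
$responseId = 'chatcmpl-000000000ID'; // placeholder ID from an earlier ->chat() response

$ch = curl_init('https://api.openai.com/v1/completions/' . $responseId);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => [
        'Authorization: Bearer ' . getenv('OPENAI_API_KEY'),
    ],
]);

$body = curl_exec($ch);
curl_close($ch);

echo $body . PHP_EOL;
```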
Is there a cheat code that’s not in the documentation? Shouldn’t it be possible to access a previous response using its chat response ID in a subsequent API call?
If not, could anyone help me understand the utility of the generated chat response ID? Does it correspond to logs kept by OpenAI?