Conversation ID Support is Missing

I'd like to keep track of chat history so that my users can maintain a fluid conversation with the chatbot. The Node.js GPT-3 API doesn't provide a way to pass a conversation or parent message id…


You need to add a database into your mix. Record the ins and outs for each user to feed into the next request. Just cap it to prevent token overflow.
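A minimal sketch of what "cap it" could look like: keep the most recent turns whose estimated token count fits a budget. The `estimateTokens` heuristic (~4 characters per token) is an assumption for illustration, not a real tokenizer.

```typescript
type Message = { role: "user" | "assistant"; content: string };

// Rough heuristic: ~4 characters per token. A real app would use a
// proper tokenizer for the target model.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Walk backwards from the newest message, keeping turns until the
// token budget is exhausted; older turns are dropped.
function capHistory(history: Message[], maxTokens: number): Message[] {
  const kept: Message[] = [];
  let used = 0;
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = estimateTokens(history[i].content);
    if (used + cost > maxTokens) break;
    kept.unshift(history[i]);
    used += cost;
  }
  return kept;
}
```

You'd run the stored history through `capHistory` before building each request, so the prompt never exceeds the model's context window.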


Being forced to cap historical conversation as input is the main issue.

What do you mean by "cap it"? @curt.kennedy @infinite

I agree that we're missing a kind of "chat session" where the model keeps persistent state.

If you re-send the conversation's history with every request (even just the most recent text, capped to the model's context size), you pay for a lot of duplicate tokens, and OpenAI spends extra compute processing the same information again and again.

OpenAI already has this in the interactive web UI, where each conversation is identified by a GUID. Hopefully they're working on bringing it to the API.

OpenAI seems unlikely to add conversation memory to their API, at least not directly. Why? Beyond the cost (they would be paying for storage), they would be on the hook for storing Personally Identifiable Information (PII), which is a no-go: they would need all sorts of compliance certifications that they aren't ready to deal with yet.

Every API call needs to process the whole conversation again, so yes, you pay for the whole thing again.
My guess is that even if they added a way to store conversations, you'd still have to pay for it, since the history has to be processed within the model's token limit and uses the same resources.
That's just how it works.
If you don't process the "duplicates" again, how would the model know about them?
Otherwise, every API call would essentially have to create a brand-new GPT model with your past conversation baked into it.

The good thing is that you have full control over how the past conversation/memory is handled.

ChatGPT is just an app using the API.
You can build an app that stores conversation ids, just like ChatGPT.
You have to store the history somewhere.
When you continue that conversation, you send the whole thing along with the next prompt.
ChatGPT most likely also sends the whole conversation with every prompt (though it might do some summarization, which means losing some control).
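The pattern described above can be sketched as a store keyed by a conversation id, where every request replays the stored turns plus the new prompt. The in-memory `Map` is a stand-in for whatever database you'd actually use; the function names are hypothetical.

```typescript
import { randomUUID } from "crypto";

type Turn = { role: "system" | "user" | "assistant"; content: string };

// Stand-in for a real database: conversation id -> stored turns.
const conversations = new Map<string, Turn[]>();

function startConversation(systemPrompt: string): string {
  const id = randomUUID();
  conversations.set(id, [{ role: "system", content: systemPrompt }]);
  return id;
}

// Build the message list to send: the entire stored history plus the new turn.
function buildRequest(id: string, userInput: string): Turn[] {
  const history = conversations.get(id) ?? [];
  return [...history, { role: "user", content: userInput }];
}

// After the model responds, persist both sides of the exchange.
function recordTurns(id: string, user: string, assistant: string): void {
  const history = conversations.get(id) ?? [];
  history.push(
    { role: "user", content: user },
    { role: "assistant", content: assistant }
  );
  conversations.set(id, history);
}
```

The key point: the id only identifies your own stored history; the API itself still receives the full message list on every call.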

I think some of what you're asking is a bit similar to fine-tuning a model, but that's not available for GPT-3.5 or GPT-4.
I'm also not sure it would be practical to create a fine-tuned model mid-conversation and then switch to it for every prompt.
Fine-tuning is more about baking in instructions and a role so you don't have to send that information again. You'd still have to send the whole conversation with each prompt, just not the instructions you fine-tuned in.
(Fine-tuned models are also more costly than regular ones.)
I think many people are using embeddings instead, since they're said to work better than fine-tuning for this.

By "cap it" I meant limit the history you send to the model. So drop off the old data, or use embeddings and correlations to feed the model the most related data based on the current input.
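The embeddings-and-correlations approach can be sketched as ranking stored snippets by cosine similarity to the current input's embedding. In practice the embeddings would come from an embeddings API; here the vectors are assumed to be already computed.

```typescript
// Cosine similarity between two equal-length vectors.
function cosineSim(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k stored snippets most similar to the query embedding;
// only these get injected into the prompt, instead of the full history.
function topK(
  query: number[],
  snippets: { text: string; embedding: number[] }[],
  k: number
): string[] {
  return snippets
    .map(s => ({ text: s.text, score: cosineSim(query, s.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map(s => s.text);
}
```

This keeps the prompt small while still surfacing the most relevant parts of a long conversation.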

As long as context windows are finite, you need to limit the data. The alternative is infinite-window RNNs, but they don't seem to perform as well as GPT-style transformer models.