I’m trying to keep the context using the API, but I’m having trouble with the conversation_id.
As a workaround, I’m currently saving every previous message (role and content) and resending the whole list with each request; with that information the responses handle the context correctly.
However, once the conversation grows too long, ChatGPT’s request (token) limit becomes a problem, so I have to keep my new questions short.
Do you know of a more elegant solution to maintain context in this situation?
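For reference, the workaround described above — keeping the full history and resending it every turn — can be sketched like this. The API call itself is commented out; model name and response handling are assumptions, and only the message bookkeeping is shown:

```python
# Minimal sketch of the resend-everything workaround. Each turn appends to
# `history`, and the full list is passed as the `messages` parameter.

def build_request(history, user_input):
    """Append the new user turn and return the full message list to send."""
    history.append({"role": "user", "content": user_input})
    return list(history)

def record_reply(history, reply_text):
    """Store the assistant's reply so the next turn includes it."""
    history.append({"role": "assistant", "content": reply_text})

history = [{"role": "system", "content": "You are a helpful assistant."}]
messages = build_request(history, "What is the capital of France?")
# response = client.chat.completions.create(model="gpt-4o", messages=messages)
record_reply(history, "Paris.")  # in practice: response.choices[0].message.content
```

This works, but as noted, the list (and so the token count) grows without bound.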
Here are a few strategies. The first seems to me the most useful, but your application may be better served by another approach.
When the conversation grows past a certain token count, get the AI to summarise the conversation (or perhaps just its earliest part). Replace the messages that have been summarised with the summary and use it as the context input for the next message. The advantage here is that you can keep summarising over and over, so the conversation can continue indefinitely — though the longer you go on, the more details will be forgotten. Another advantage is that you can cap the size of the message set you send for any conversation.
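A sketch of that summarise-and-replace idea, with two loudly labelled assumptions: `summarise` is a stub (in practice it would be a separate chat-completion call asking the model to summarise the old messages), and token counting is approximated by word count for illustration (use a real tokenizer such as tiktoken, or the `usage` field from responses, in production):

```python
# Summarise-and-replace: when the history is over budget, fold everything
# except the most recent turns into a single summary message.

def count_tokens(messages):
    # Crude stand-in for a real tokenizer (assumption for this sketch).
    return sum(len(m["content"].split()) for m in messages)

def summarise(messages):
    # Placeholder: a real implementation would call the chat API with a
    # "summarise this conversation" prompt over `messages`.
    return "Summary of %d earlier messages." % len(messages)

def compact(history, max_tokens, keep_last=2):
    """If over budget, replace all but the last few turns with a summary."""
    if count_tokens(history) <= max_tokens or len(history) <= keep_last:
        return history
    old, recent = history[:-keep_last], history[-keep_last:]
    summary = {"role": "system", "content": summarise(old)}
    return [summary] + recent

history = [{"role": "user", "content": "one two three four five"},
           {"role": "assistant", "content": "six seven eight nine ten"},
           {"role": "user", "content": "latest question"}]
compacted = compact(history, max_tokens=8)
```

Because `compact` can run on every turn, earlier summaries simply get folded into newer ones, which is what keeps the conversation going indefinitely.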
Alternatively, you could use a model that allows for more tokens. (See here for documentation on the max tokens for each model.) This gives your bot more “memory”, but there will still be a hard limit.
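Since even a larger-context model still has a hard cap, a common fallback is to drop the oldest non-system messages until the request fits. A sketch, again using word count as a stand-in for a real tokenizer (an assumption):

```python
# Sliding-window trimming: discard the oldest non-system messages until
# the history fits under the model's token budget.

def count_tokens(messages):
    # Crude stand-in for a real tokenizer (assumption for this sketch).
    return sum(len(m["content"].split()) for m in messages)

def trim_to_fit(history, max_tokens):
    """Drop oldest non-system messages until under the token budget."""
    trimmed = list(history)
    while count_tokens(trimmed) > max_tokens:
        for i, m in enumerate(trimmed):
            if m["role"] != "system":
                del trimmed[i]
                break
        else:  # only system messages left; nothing more to drop
            break
    return trimmed

history = [{"role": "system", "content": "be brief"},
           {"role": "user", "content": "a b c"},
           {"role": "assistant", "content": "d e"},
           {"role": "user", "content": "f"}]
trimmed = trim_to_fit(history, max_tokens=5)
```

Keeping the system message pinned while trimming is a deliberate choice: it usually carries instructions the bot needs on every turn.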
Do you need all the previous messages? Could you keep only the most important ones? Experimenting with which messages the bot actually needs to answer well could let you keep just the salient information from a conversation.
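That filtering idea could be sketched as below. The `importance` scorer here is entirely hypothetical (a length heuristic stands in); in practice you would test what works — for example, embedding similarity to the new question, or asking the model to rate each message's relevance:

```python
# Keep only the top-scoring earlier messages, then append the new question.
# The scoring function is a placeholder heuristic, not a recommendation.

def importance(message):
    # Hypothetical heuristic: longer messages score higher.
    return len(message["content"])

def select_salient(history, new_question, keep=2):
    """Keep the `keep` top-scoring earlier messages plus the new question."""
    ranked = sorted(history, key=importance, reverse=True)[:keep]
    # Preserve the original conversation order among the kept messages.
    kept = [m for m in history if m in ranked]
    return kept + [{"role": "user", "content": new_question}]

history = [{"role": "user", "content": "hi"},
           {"role": "assistant", "content": "hello, how can I help?"},
           {"role": "user", "content": "my order number is 12345"}]
context = select_salient(history, "What was my order number?", keep=2)
```

Here the greeting is dropped while the message containing the order number survives, which is the kind of outcome a good relevance score should produce.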