Fine-tuned models and continuing conversation contexts

We have a compendium of about 40k reviews that we fine-tuned a model on (the base model is Davinci), and the result is pretty much what we wanted: the model returns completions that seem to be drawn from the reviews.
However, when I then send a second, more generic prompt, such as “And what do other people think of this?”, the model loses the context of what it was saying before and jumps to something completely different.

I could use some guidance here. Every time I send an API request, I include only the current prompt, not the whole conversation; I assumed sending everything would drive up the cost quickly. What’s the best practice for making the model remember the context of the whole conversation?

Thanks in advance.

You must send at least a summary of the prior conversation with each request. Unlike ChatGPT, the OpenAI API does not “remember” earlier prompts; memory is a feature of the ChatGPT application, which manages the conversation for you. The API is a building block for your own applications, so carrying context between requests is your responsibility.
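As a minimal sketch of one way to do that (Python, using the pre-1.0 `openai` SDK; the model name, prompt formatting, and character budget below are placeholders I invented, not anything specific to your setup):

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Hypothetical fine-tuned model name; substitute your own.
MODEL = "davinci:ft-your-org-2023-01-01"

history = []  # list of (prompt, completion) turns so far


def ask(prompt, max_history_chars=2000):
    # Prepend as many recent turns as fit the budget, newest last,
    # so the model sees the conversation it has had so far.
    # (Character count is a crude stand-in for a real token count.)
    context = ""
    for p, c in reversed(history):
        turn = f"User: {p}\nAssistant: {c}\n"
        if len(context) + len(turn) > max_history_chars:
            break
        context = turn + context

    full_prompt = context + f"User: {prompt}\nAssistant:"
    resp = openai.Completion.create(
        model=MODEL,
        prompt=full_prompt,
        max_tokens=200,
        stop=["User:"],  # keep the model from writing the next user turn
    )
    completion = resp["choices"][0]["text"].strip()
    history.append((prompt, completion))
    return completion
```

To keep costs bounded on long conversations, you could also periodically ask the model to summarize the transcript and replace the oldest turns with that summary, rather than carrying every turn verbatim.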

I assume this feature may be available when the ChatGPT API is released, so you could also just wait for that. It’s coming soon.


Thank you very much for the quick reply. That makes sense. I’ll try sending a summary with every prompt and see if that helps.