We have a compendium of about 40k reviews that we fine-tuned a GPT-3 model on (ChatGPT itself isn't fine-tunable; the base model is Davinci). The end result seems to be pretty much what we wanted: the model returns completions that appear to be drawn from the reviews.
However, when you then send a second, more generic prompt, such as "And what do other people think of this?", it seems to lose the context of what it was saying before and jumps to something completely different.
I could use some guidance here. Each API request I send contains only the current prompt, not the whole conversation, because I assumed including everything would drive up the cost quickly. What's the best practice for getting the model to keep the context of the whole conversation?
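For illustration, here is roughly what I imagine the alternative looks like: rebuilding the prompt from the full conversation on every request, and dropping the oldest turns once a size budget is exceeded. This is just a sketch of the idea; the `build_prompt` helper, the `(speaker, text)` history format, and the character budget (a crude stand-in for a real token count) are my own assumptions, not anything from the API.

```python
def build_prompt(history, new_question, max_chars=6000):
    """Concatenate prior turns plus the new question into one prompt.

    history: list of (speaker, text) tuples, oldest first.
    Oldest turns are dropped until the prompt fits a rough
    character budget, so cost stays bounded while recent
    context is preserved.
    """
    turns = history + [("User", new_question)]
    lines = [f"{speaker}: {text}" for speaker, text in turns]
    # Drop the oldest turn until the assembled prompt fits the budget.
    while len("\n".join(lines)) > max_chars and len(lines) > 1:
        lines.pop(0)
    return "\n".join(lines) + "\nAssistant:"

# Example: the second, generic question now arrives with its context.
history = [
    ("User", "What do reviewers say about the battery life?"),
    ("Assistant", "Most reviews praise the battery life."),
]
prompt = build_prompt(history, "And what do other people think of this?")
```

The resulting string would then be sent as the `prompt` of each completion request, so the follow-up question carries the earlier exchange with it.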
Thanks in advance.