I am currently working on an integration with the Davinci model, but the prompt I am using is quite large and complex. Only a few keywords within the prompt will be changed based on user input, while the rest of the text remains the same. Is there a way to store this template text on OpenAI’s end to reduce the cost of the prompt?
Hi,
I’m trying to do the same thing on my app and I’m thinking about using a trained custom model.
First you collect a decent number of { prompt: "long prompt", completion: "good response" } objects, then you transform them into a set of { prompt: "keywords", completion: "good response" } objects that you use as the training file for the current model.
This should "teach" the model to build similar responses from just the few keywords instead of the full prompt.
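A minimal sketch of that transformation step, assuming the legacy prompt/completion fine-tuning format (a JSONL file with one `{"prompt": ..., "completion": ...}` object per line). The example data, the `\n\n###\n\n` separator, and the ` END` stop token follow the conventions commonly recommended for that format; all names and contents here are illustrative, not from this thread:

```python
import json

# Hypothetical long template; only the keywords vary per example.
LONG_PROMPT_TEMPLATE = (
    "You are a product copywriter. Write a short, upbeat description "
    "for the following product: {keywords}"
)

# Original (long prompt, good completion) pairs.
examples = [
    {"prompt": LONG_PROMPT_TEMPLATE.format(keywords="red running shoes"),
     "completion": "Lightweight red runners built for speed and comfort."},
    {"prompt": LONG_PROMPT_TEMPLATE.format(keywords="steel water bottle"),
     "completion": "A rugged steel bottle that keeps drinks cold all day."},
]

def to_training_record(example, keywords):
    # Keep the completion, but replace the long prompt with just the
    # keywords. The separator and trailing stop token help the model
    # learn where the prompt ends and the completion should stop.
    return {
        "prompt": keywords + "\n\n###\n\n",
        "completion": " " + example["completion"] + " END",
    }

keyword_sets = ["red running shoes", "steel water bottle"]
records = [to_training_record(ex, kw)
           for ex, kw in zip(examples, keyword_sets)]

# Write one JSON object per line (JSONL), the shape the fine-tuning
# endpoint expected for its training file.
with open("training_data.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```

Once the keyword-only records are written out, the JSONL file is what you upload as the fine-tuning training file; at inference time you then send only the keywords (plus the same separator) as the prompt.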
So far I haven’t reached 100 data objects, so I’m waiting a bit longer before trying it out, but if you already have the data I would love to hear about your results.
Hmm, nice take on it. I haven’t used fine-tuning yet and have no idea what it needs from me. I’ll explore this soon. Thanks for your input.