Not fine-tuning. I'm using few-shot samples along with a prompt to generate the answer to a question. Sadly I can't share more details, as it's for proprietary software.
In my experience, the lower you go with temperature, the more likely it is to overfit/repeat… If you can’t raise the temperature, try moving frequency_penalty up a bit… but slowly… 0.05 at a time maybe?
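A minimal sketch of that sweep: hold temperature low and bump frequency_penalty up 0.05 at a time, comparing outputs at each step. The model name, prompt, and helper names here are placeholders, not from this thread, and the call itself is left commented out.

```python
# Sweep frequency_penalty upward in small (0.05) steps while keeping
# temperature low, then compare the completions for repetition.

def penalty_sweep(start=0.0, stop=0.3, step=0.05):
    """Return frequency_penalty values in 0.05 increments."""
    n = round((stop - start) / step)
    return [round(start + i * step, 2) for i in range(n + 1)]

def build_request(prompt, penalty, temperature=0.2):
    # Parameters for a completion call; plug into your client of choice,
    # e.g. openai.Completion.create(**params) with the legacy SDK.
    return {
        "model": "text-davinci-003",  # assumed model; swap in your own
        "prompt": prompt,
        "temperature": temperature,    # kept low for near-deterministic output
        "frequency_penalty": penalty,  # nudged up gradually to curb repetition
        "max_tokens": 256,
    }

for p in penalty_sweep():
    params = build_request("your few-shot prompt here", p)
    # response = openai.Completion.create(**params)  # inspect output per step
```

The idea is to change only one knob at a time so you can tell whether the penalty, not the temperature, is what reduces the repetition.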
This is something I have been tinkering with a bit; hopefully I can find the sweet spot soon. I want the answer to be deterministic to a certain extent, so I have been using a low temperature, and that has definitely led to overfitting/repetition.
The base model I'm currently using is davinci-003, and I've recently started using gpt-3.5-turbo as well. I'm not sure whether a fine-tuned davinci model would solve my problem.