Fine-tuned model handles prompts differently

I’ve been following the docs on fine-tuning for maintaining a company voice.

I used ~500 articles as the training set on the davinci model. However, when I now give it a prompt like “Write an article about…”, which would normally produce an article with the base model, the fine-tuned model treats the prompt as the lead-in to the article and proceeds to generate something like a community challenge about writing an article.

Any tips on getting the fine-tuned model to understand instructions in the prompt rather than assume that’s the start of the text it needs to generate?


What did you use for a prompt in your fine-tuning data?

I left the prompt blank, as recommended in the docs.

Can you share your prompt structure so we can better help out? I think leaving the prompts blank is the cause here.
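
To illustrate what I mean: instead of leaving the prompt blank, each training example could pair the instruction with the article. Here is a minimal sketch, assuming the legacy prompt/completion JSONL format; the `\n\n###\n\n` separator, the ` END` stop token, and the article text are illustrative, not taken from your data:

```python
import json

# Hypothetical training examples: the instruction goes in the prompt,
# the article goes in the completion, with a fixed separator and stop token.
examples = [
    {
        "prompt": "Write an article about our Q3 product launch.\n\n###\n\n",
        "completion": " Today we are excited to announce... END",
    },
    {
        "prompt": "Write an article about remote onboarding.\n\n###\n\n",
        "completion": " Onboarding a new teammate remotely... END",
    },
]

# Write the examples out as JSONL for the fine-tuning job.
with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

At inference time you would then end your prompt with the same separator and pass the same stop sequence to the API, so the model learns that the instruction is a prompt rather than the opening of the article.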

I’ve created over 200 fine-tuned models for creative generation. My learning is that I shouldn’t have used (OpenAI) fine-tuning, but instead doubled down on using embeddings as a means to provide the kind of examples you’d give in fine-tuning, but automatically.
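
Roughly what I mean, as a sketch only: retrieve your most similar past articles with embeddings and feed them to the base model as style examples. This assumes the pre-1.0 openai Python library; the model names, placeholder articles, and the `---` separator are illustrative, not from my actual setup:

```python
import numpy as np
import openai

# Placeholder past articles; in practice you'd precompute and store
# their embeddings in a vector store rather than in memory.
articles = [
    "Full text of past article 1...",
    "Full text of past article 2...",
]

def embed(text):
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

article_vectors = [embed(a) for a in articles]

def most_similar(query, k=2):
    # Rank stored articles by cosine similarity to the query.
    q = embed(query)
    sims = [float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
            for v in article_vectors]
    ranked = sorted(zip(sims, articles), reverse=True)
    return [a for _, a in ranked[:k]]

instruction = "Write an article about our new onboarding process."
examples = "\n\n---\n\n".join(most_similar(instruction))

# Give the base model the retrieved articles as voice/style examples,
# then the instruction, instead of relying on a fine-tuned model.
prompt = (
    "Here are examples of articles in our company voice:\n\n"
    f"{examples}\n\n---\n\n{instruction}\n"
)
completion = openai.Completion.create(
    model="text-davinci-003", prompt=prompt, max_tokens=800
)
print(completion["choices"][0]["text"])
```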

Hi Louis, if you would like to fine-tune with sample articles in order to get the same or similar output in format, style, tone, etc., what would you recommend?

Hi GSC, I have the same question. Were you able to come up with any way this can be achieved?