Fine-tuning quality davinci vs text-davinci-003

I’m experimenting with using GPT-3 to convert unstructured text into standardized JSON. When I test this with a completion on the text-davinci-003 model, asking it to convert the text to a JSON schema I provide, the model handles it intuitively and gives very good results.

e.g. Prompt:

Convert the following text to this JSON schema: 
{
  "field": "string"
}

Text:  <unstructured text>

e.g. Completion:

{
  "field": "<extracted value>"
}
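
For reference, this is roughly how I'm making the call, via the legacy completions endpoint (a minimal sketch; the prompt text and parameter values are illustrative, not my exact setup):

```python
import openai  # legacy openai-python (<1.0) interface

# Illustrative sketch of the completion call described above; the
# prompt and parameter values are placeholders, not my exact setup.
prompt = (
    "Convert the following text to this JSON schema:\n"
    '{\n  "field": "string"\n}\n\n'
    "Text: <unstructured text>"
)

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    temperature=0,    # deterministic output suits structured JSON
    max_tokens=256,
)
print(response["choices"][0]["text"])
```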

However, when I go to fine-tune a custom model, the only option is to use davinci (which, as I understand it, is a more basic version of text-davinci-003). I fine-tune the custom model using the same unstructured text, with the completion being the JSON I want. When I give it an unseen prompt, the model responds with either an empty string or some irrelevant, garbled text.

e.g. Fine-tuning:
{"prompt": "<unstructured text>", "completion": " <json>"}

then running an unseen prompt produces an empty string or garbled text.
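
In case it helps, here's a minimal sketch of how I'm assembling the JSONL file (the helper name, file path, and example pair below are just for illustration):

```python
import json

# Illustrative helper for assembling the fine-tuning file described
# above; the function name, file path, and example pair are placeholders.
def build_dataset(pairs, path="fine_tune_data.jsonl"):
    with open(path, "w") as f:
        for text, target_json in pairs:
            record = {
                "prompt": text,
                # completions start with a leading space, per the fine-tuning docs
                "completion": " " + json.dumps(target_json),
            }
            f.write(json.dumps(record) + "\n")

build_dataset([("<unstructured text>", {"field": "<extracted value>"})])
```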

Can anyone point me in the right direction here? text-davinci-003 seems more capable out of the box at handling this task. However, my application needs the model to be trained on certain phrases that text-davinci-003 won’t have knowledge of, hence the need for fine-tuning. But the base davinci model seems incapable of generating JSON. I’ve only given a small number of training examples; do I need to provide more, or change my methodology? What can I do to get the desired completion?


Hey, welcome to the community.

the only option is to use davinci (which as I understand, is a more basic version of text-davinci-003)

From what I know, text-davinci-003 is a davinci model that was specifically fine-tuned to follow instructions… which is why they don’t want us to fine-tune it further.

How many samples did you use in your dataset? What settings are you using for generation?
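
With fine-tuned base models, the prompt/completion formatting and the generation settings matter a lot. Something like this is the usual pattern (the model id, separator, and stop string below are made-up examples, not anything from your post):

```python
import openai  # legacy openai-python (<1.0) interface

# Rough sketch only: the model id, separator, and stop string are
# made-up examples. The separator appended to the prompt and the stop
# sequence must match whatever formatting the training data used.
response = openai.Completion.create(
    model="davinci:ft-your-org-2023-01-01",  # hypothetical fine-tune id
    prompt="<unstructured text>" + "\n\n###\n\n",
    temperature=0,           # deterministic output for structured JSON
    max_tokens=256,
    stop=[" END"],           # end-of-completion marker used in training
)
print(response["choices"][0]["text"])
```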
