Fine-tuning on text-davinci-003

I’ve been using text-davinci-003 to classify and extract text from academic medical publications. It works amazingly well considering it hasn’t been specifically tuned for this domain, but in my field we need to aim for “nearly perfect” rather than “amazingly well”. I’ve tried some of the older davinci models for this purpose, but the quality of their responses and their token limits are a major problem. Being able to fine-tune text-davinci-003 would be an absolute game changer for us; does anybody know if OpenAI is planning to release this capability at some point? I’m sure I’m not the first person to ask!
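
For context, this is roughly the kind of call I’m making today. It’s a trimmed sketch rather than my actual pipeline; the prompt, parameters, and API key here are just placeholders:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Single-shot extraction/classification prompt against text-davinci-003
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=(
        "Extract the study design and sample size from the abstract below.\n\n"
        "Abstract: <abstract text here>\n\n"
        "Study design:"
    ),
    max_tokens=256,
    temperature=0,
)

print(response["choices"][0]["text"].strip())
```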


Sorry, according to @logankilpatrick, there are no plans to add text-davinci-003 to the list of fine-tunable models.

As you have learned, the only fine-tunable davinci model is the base model, which is simply named davinci.
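
If it helps in the meantime, here is a minimal sketch of starting a fine-tune against base davinci, assuming the legacy openai Python package (pre-1.0) and a prepared JSONL file of prompt/completion pairs; the API key and file name are just placeholders:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Upload a prepared JSONL file of {"prompt": ..., "completion": ...} records
training_file = openai.File.create(
    file=open("train.jsonl", "rb"),  # placeholder file name
    purpose="fine-tune",
)

# Start a fine-tune job against the base davinci model
job = openai.FineTune.create(
    training_file=training_file["id"],
    model="davinci",
)

print(job["id"])  # poll this job ID to track training progress
```

Keep in mind that base davinci is not instruction-tuned, so your prompts and completions have to carry the whole task format, and out of the box the results usually won’t match text-davinci-003.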