Fine-Tuning text-davinci-003 Models


I have recently been testing the base functionality of each model to see which would fit my application, and it is very clear that, out of the box, the text-davinci-003 model gives significantly better responses than the others (this is a chat-style project).

The goal is to tune the model with a set of prompts related to a few topics; however, we can only generate a few hundred, maybe 1,000, for the topics at hand, so we need a robust base model to work from.

I was able to successfully train a davinci base model on my Windows machine after some difficulty, but I notice it now responds poorly to most prompts compared with text-davinci-003, and about the same as the original davinci.

Is it possible to fine-tune on top of the text-davinci-003 model? If so, how do we designate it in the tuning command? If not, why not? This seems like a great deal of lost potential, and it somewhat limits the usefulness for my application without extensive training to get almost exactly to 003's level.

We want something exactly like 003, but with a small amount of information added on top that it will remember consistently across all interactions. This seems like it would be helpful for tons of applications, so I may be missing something.

Thanks for the guidance,


Forgive me if I'm wrong, but text-davinci-003 is the base davinci model already fine-tuned with a shed load of data. From scratch, I believe you can currently only fine-tune the base davinci model. Being able to add on to 003 is something I'm sure we'd all like right now. Or better yet, ChatGPT.


@drinkingteddy You are correct.

@potential.planetary Take a look at the docs for fine-tuning. It explains which models are available for fine-tuning.

Every fine-tuning job starts from a base model, which defaults to curie. The choice of model influences both the performance of the model and the cost of running your fine-tuned model. Your model can be one of: ada, babbage, curie, or davinci.
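For anyone else getting started: a fine-tuning dataset for those base models is just a JSONL file of prompt/completion pairs. A minimal sketch of preparing one (the example texts and file name are purely illustrative):

```python
import json

# A few topic-specific examples in the prompt/completion format the
# legacy fine-tuning endpoint expects: one JSON object per line.
examples = [
    {"prompt": "Customer: What are your store hours?\nAgent:",
     "completion": " We're open 9am to 9pm, seven days a week.\n"},
    {"prompt": "Customer: Do you ship internationally?\nAgent:",
     "completion": " Yes, we ship to most countries worldwide.\n"},
]

# Write one JSON object per line (JSONL), ready to upload for tuning.
with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

The consistent "\nAgent:" suffix and leading space on completions follow the data-formatting conventions in the fine-tuning docs.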


Very unfortunate. The main benefit of having ChatGPT or 003 would be to be able to, quite literally, ‘Fine-tune’ it to our needs. Instead, we are most certainly “coarse” tuning a completely bare model - which would cost time and money well beyond reasonable for the scope of projects it seems OpenAI is aiming for.

Roughly 99.999% of the training we would hope for is ideally identical to either 003 or ChatGPT. Paying to have these retrained wastes resources on both ends, though I guess it is a source of income. If we cannot tune from 003, can we buy the prompts from OpenAI and run the training ourselves? Seems repetitive, but it would solve the problem.

My application is chatting, so there needs to be a lot of fundamental prompting on how to chat and create conversation; then, on top of that, I need to feed it some basic information about the topic at hand. That would also make it much easier to start working on the chatbot's applicable 'memory', and to make prompts specific to the topic that it uses for every future exchange, rather than spending all the time tuning its fundamental ability to talk.

Is it possible to request a pre-tuned 003 model to work from? Or purchase one?

I can understand if the dataset it used was so vast that tuning on top of it would be too much, but adding 100-2,000 prompts about our specific needs would mean NOT having to define these conditions every time we call the API, and not having to train the base model with millions of prompts exactly the way OpenAI did. Either this is a method to get more value out of those paying to fine-tune from the base, or it's a tech issue I am too much of a rookie to understand.

Really bums me out. It seemed like it had a lot of potential until I found out this one thing. I am not sure of the technological hurdles preventing this, but from what I can tell, without a team or tons of time to make your own prompts from scratch, what we end up buying is a basic chatbot and paying for its operation, instead of the ability to fine-tune the technology OpenAI worked so hard to innovate through prompt development.

Or, perhaps, is there a larger tier of service that includes this? It seems like something they would make available to companies at a minimum. (E.g.: Walmart, take our great chatbot and give it the info about your products.)

Thanks, and sorry for the pessimism. :smiley: I read the note Jacob posted before diving into this project, but assumed it meant ALL versions of those base models, not just the base ones. It made no sense to me why you would include davinci and not the improved davinci. Really cool tech, just not as set up for personal/small-budget use as I had hoped. Maybe in a few years, or if they hear our cries for access to a 003 tuning model. I'd rather have that with tuning than ChatGPT, because either one without tuning can only serve very general purposes.

Thanks for reading the wall! Hopefully some of you experts might have advice on how I could proceed, if it's even viable.

I'm guessing it's just a matter of time.

I have a similar application and have got a decent working solution. If you code, you can use the context ability of text-davinci-003 to drive the flow of the conversation, especially in a business environment with a predefined subject matter. If you can carry the flow, you don't need to load up the one-shot prompt with all of the info all of the time. You can also tell the bot to do this or that, and if the customer says this or says that, respond in some custom way. It's a chunkier workaround than being able to fine-tune the fine-tuned D003, but we are where we are. Still cool.
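To make "carrying the flow" concrete: one common approach is to keep recent conversation turns in a buffer and prepend them, along with a short instruction block, to each completion request, so the one-shot prompt doesn't have to restate everything. A minimal sketch, where the instruction text, names, and turn limit are all illustrative assumptions:

```python
# Standing instructions sent with every request (illustrative content).
SYSTEM_INSTRUCTIONS = (
    "You are a helpful support bot for Acme Widgets. "
    "If the customer asks about refunds, point them to the returns page."
)

def build_prompt(history, user_message, max_turns=6):
    """Assemble a completion prompt from standing instructions plus
    the most recent conversation turns and the new customer message."""
    recent = history[-max_turns:]          # keep only the last few turns
    lines = [SYSTEM_INSTRUCTIONS, ""]
    for speaker, text in recent:
        lines.append(f"{speaker}: {text}")
    lines.append(f"Customer: {user_message}")
    lines.append("Bot:")                   # cue the model to answer next
    return "\n".join(lines)

# The returned string would be sent as the `prompt` of a
# text-davinci-003 completion request; the model's reply is appended
# to `history` so the next call carries the conversation forward.
```

Truncating to the last few turns keeps the prompt within the context window; for longer memories you'd summarize older turns instead of dropping them.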

Me: Why is it that on OpenAI’s website there are only options for fine-tuning the base models rather than the upgraded ones?

ChatGPT: OpenAI’s website provides the option to fine-tune the base models because they have been trained on a broad range of text data and are well-suited for a wide range of use cases. Additionally, the base models are less computationally expensive to fine-tune than the upgraded models, making them more accessible to a wider range of users.

However, it’s worth noting that OpenAI’s API allows for fine-tuning the upgraded models such as “davinci-03”, as I’ve explained in my previous answers. The option to fine-tune the upgraded models is not as visible on the website as the base models, but it’s possible to do it via API.

It’s also important to keep in mind that fine-tuning the upgraded models such as “davinci-03” is more computationally expensive, and it requires more data to train on, also the cost of fine-tuning an upgraded model is higher than using a base model.

Therefore, it’s important to consider the cost and the benefits of using an upgraded model before fine-tuning it, and check the pricing plans available on OpenAI’s website or contact their sales team to get a quote for your specific use case.

So I guess what we can take from this is that it is available?


The OpenAI API can currently only accept the following model names when you fine-tune:

  • ada,
  • babbage,
  • curie,
  • davinci, or
  • a fine-tuned model created by your organization
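In other words, passing "text-davinci-003" to the fine-tune endpoint will simply be rejected. A small sketch of guarding for that before calling the legacy endpoint (the actual API call is left as a comment, and the file id is a made-up placeholder):

```python
# Base models the legacy fine-tune endpoint accepts (plus any
# fine-tuned model your own organization has already created).
ALLOWED_BASES = {"ada", "babbage", "curie", "davinci"}

def finetune_params(training_file_id, base_model="davinci"):
    """Build kwargs for a fine-tune job, rejecting model names
    (like 'text-davinci-003') that the endpoint does not accept."""
    if base_model not in ALLOWED_BASES:
        raise ValueError(
            f"{base_model!r} cannot be fine-tuned; choose one of "
            f"{sorted(ALLOWED_BASES)} or a fine-tune your org created."
        )
    return {"training_file": training_file_id, "model": base_model}

params = finetune_params("file-abc123", "davinci")
# import openai; openai.FineTune.create(**params)  # the actual API call
```

Failing fast locally like this saves an upload-and-wait round trip just to get the same error back from the API.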

It tells lies. If you say "No, you can't", it will likely reply "You're right, you can't, I'm sorry."

Thank you so much for these details! It does confirm my suspicion regarding the excess expense and the viability of wide adoption via the API. I saw an announcement suggesting ChatGPT access is on the way, and hopefully it comes with some level of 'memory', or the ability to add small prompt sets on top of the improved models.

Could you expand on this? What do you mean by the “context ability”?