Fine-tuning: This new update is going to give me nightmares, how's everyone feeling about it?

Hey everyone,
Two hours ago OpenAI announced that they are retiring some models and, man, I have quite a few things running on models that are slated for retirement.

I guess changing things from the Completions route to the ChatCompletions route isn’t the worst thing in the world. I’m a bit more concerned about the prices: some older models could handle certain tasks fairly well while saving money, or give projects the possibility of offering both a free and a paid version.
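For anyone facing the same migration, the change is mostly one of payload shape: the Completions route takes a single prompt string, while the ChatCompletions route takes a list of role-tagged messages. Here is a minimal sketch of that transformation; the request bodies are shown as plain dicts (in practice you'd send them via the openai client library), and the `to_chat_request` helper is illustrative, not anything from the API itself:

```python
# Sketch of the payload change when moving from Completions to ChatCompletions.
# Requests are plain dicts here; in practice they are sent through the
# openai client library or an HTTPS POST to the API endpoint.

# Old-style Completions request: a single prompt string.
completions_request = {
    "model": "text-davinci-003",
    "prompt": "Summarize: The quick brown fox jumps over the lazy dog.",
    "max_tokens": 64,
}

def to_chat_request(old: dict) -> dict:
    """Convert a Completions-style request into a ChatCompletions-style one.

    The prompt string becomes a one-element list of messages, each with a
    "role" and "content"; other fields carry over unchanged.
    """
    return {
        "model": "gpt-3.5-turbo",  # suggested replacement model (assumption)
        "messages": [{"role": "user", "content": old["prompt"]}],
        "max_tokens": old.get("max_tokens", 16),
    }

chat_request = to_chat_request(completions_request)
print(chat_request["messages"][0]["role"])  # -> user
```

The response shape changes in the same way: instead of `choices[0].text`, the reply comes back under `choices[0].message.content`.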

Also, isn’t the model text-davinci-003 the equivalent of GPT-3? If so, are models going to be available for two years and then retired? I’d love to understand the reasoning better, but I have a feeling more will be explained as we get closer to January 2024 in a few months.


I was hoping that instead of retiring models, the price per token could be reduced, so that more ambitious projects with monetary constraints could be experimented with.


I’m a bit anxious about fine-tuning now. After all, don’t we all just have access to text-davinci-003 fine-tuning? So anyone working on fine-tuning will 100% have to go through the transition? Maybe gpt-3.5-turbo fine-tuning was released and I missed it.

Feels a bit like the Python 2 to Python 3 days.

So, how is everyone feeling about it?


Yeah, I have read a lot of complaints about it. I think in the long run we will benefit from improved models.


How do I know if my API key is GPT-3 or GPT-4?

API keys work for all models you have access to.


I suggest going over the documentation. Try using gpt-3.5-turbo first; once that works, change the model field to gpt-4. GPT-4 works on the ChatCompletions route, but explaining it all before you’ve read the documentation can be overwhelming.

There are many ways to do this, but the safest route is to follow the documentation as it’ll guide you quite nicely.
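To make the above concrete: switching from gpt-3.5-turbo to gpt-4 is a one-field change, since both use the same ChatCompletions request shape. A minimal sketch, again with the request shown as a plain dict rather than an actual API call:

```python
# The same ChatCompletions request works for both models; only the
# "model" field differs. Everything else stays unchanged.

base_request = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}],
}

# Once the gpt-3.5-turbo call works, upgrading is a one-field change:
gpt4_request = {**base_request, "model": "gpt-4"}

print(gpt4_request["model"])  # -> gpt-4
# The messages payload is identical between the two requests.
print(gpt4_request["messages"] == base_request["messages"])  # -> True
```

Note that the API key itself doesn’t encode a model, as mentioned above; whether a given call succeeds depends on which models your account has access to.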