Any idea when gpt-4 fine tuning will be released for everyone?

For me, 3.5-turbo fine-tunes often run into infinite loops or can't handle complex scenarios.

Are certain countries going to get access first?

Hoping to get more details about this

I don’t believe that GPT-4 fine-tuning will ever be available to everyone. My belief is that it will only be for businesses.

I don’t think there’s a general timeline for this, but you can request access to GPT-4 fine-tuning through the fine-tuning interface over here:


Keep in mind that you can only request access once you are eligible.

Can we verify this?

I’ve been able to request access since the announcement during the keynote, so I’m assuming everyone is able to do so :thinking:

Nope, I can confirm that I did not receive access. I also had done no fine-tuning. I only started fine-tuning yesterday.

I suppose I have been “invited” because it says I can fine-tune GPT-4.

Been using fine-tunes for a while now.


It looked that way for me as well, but I got a pop-up when I tried selecting the model :thinking:

Yep, same here. I get the “Request Access” green button.

What the first step of access looks like, turned on based on your previous fine-tuning activity (maybe from before DevDay):

The second is you providing form data of sufficient intrigue.

GPT-4 Fine-tuning Interest Form

GPT-4 fine-tuning is being developed through an experimental access program. We are eager to understand your specific use cases for GPT-4 fine-tuning to better tailor the product to your needs. Please fill out this form with details of your intended application. We will notify you as soon as the product is ready for wider use.

So still waitlist-like.

“Ever” is a very long time. With the cost of compute being cut in half every 3–4 years (roughly an order of magnitude every 10 years), I can’t imagine GPT-4 (or an equivalently capable model) not being offered for fine-tuning within the next decade.
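As a quick sanity check on that rate (a sketch assuming a 3.5-year halving time, the midpoint of the 3–4 year range):

```python
# Halving compute cost every ~3.5 years: how much cheaper per decade?
halving_years = 3.5                      # assumed midpoint of the 3-4 year range
decade_factor = 2 ** (10 / halving_years)
print(round(decade_factor, 1))           # -> 7.2, close to an order of magnitude
```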

While “sometime between now and the end of 2033” isn’t exactly a helpful answer, I think it’s far more likely than never.

Now, with all that said, I think the major problem with GPT-4 fine-tuning is one of cost and of who will actually use it. If training costs something like 8x the base input rate and usage is about 3x the base model’s rates, then something like GPT-4-32K would be about $0.48 / 1K tokens to train, $0.18 / 1K input tokens, and $0.36 / 1K output tokens.
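Those figures follow from GPT-4-32K’s base prices at the time ($0.06 / 1K input, $0.12 / 1K output); here is the arithmetic as a minimal sketch, where the 8x and 3x multipliers are my own guess, working in cents to keep it exact:

```python
# Speculative GPT-4-32K fine-tune pricing: training at 8x the base input rate,
# usage at 3x the base rates. Prices in cents per 1K tokens to stay exact.
base_input, base_output = 6, 12          # GPT-4-32K base rates (cents / 1K tokens)
train_cost = 8 * base_input              # 48 cents = $0.48 / 1K training tokens
ft_input   = 3 * base_input              # 18 cents = $0.18 / 1K input tokens
ft_output  = 3 * base_output             # 36 cents = $0.36 / 1K output tokens
print(train_cost, ft_input, ft_output)   # -> 48 18 36
```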

I think opening it up first to orgs with a proven fine-tuning track record makes a lot more sense than opening it up to everyone, just so a lot of inexperienced users can try fine-tuning with insufficient quantities of low-quality training data, creating versions of the model they’ll never use and OpenAI needs to store indefinitely.

It just gums up the works for everyone and leads to a lot of unhappy people—especially those who wasted good money trying to fine-tune a bad model.

So, to revise my “within the next ten years” answer, I would say they will open up GPT-4 fine-tuning to everyone after the vast majority of their current heavy users of fine-tuned models have transitioned to using GPT-4-based fine-tunes and when the price offered to the consumer to create a fine-tuned GPT-4 model can be offered at under ~$0.05 / 1K training tokens.

My wild, zero-insider-knowledge, completely speculative guess as to when that will be? 12–18 months, though I would be happy to be wrong if it happens sooner.

I don’t have gpt-4 in my options.