Access to GPT-4 finetuning

Hi - I am wondering about how fast OpenAI is rolling out access to GPT-4 finetuning. Has anyone here gained access yet or knows something in that regard? I was able to request access in the finetuning UI but have not heard anything since early November.



Fine-tuning for GPT-4 is in an experimental access program - eligible users can request access.

From the docs: "experimental — eligible users will be presented with an option to request access in the fine-tuning UI"

Hi Fran - Yes, thanks. I am aware of that. I can and did request access but did not receive any feedback yet and it is still not available to me as an option when I finetune. So I wondered if others had been granted access already.


I requested weeks ago, zero response

But those who know people at OpenAI, and other insiders, do seem to have access.


How would I know if I have access to fine-tuning for GPT-4?

Are you able to request early access? If you have access, you'll see the option in the fine-tuning UI.

Hoping OpenAI opens GPT-4 fine-tuning up broadly ASAP.


After training a couple of GPT-3.5 models, I got access within about a week.

As long as you have already done some fine-tuning of other models, you should get access pretty quickly, assuming you haven't already.
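For anyone who hasn't fine-tuned GPT-3.5 yet and wants to satisfy that prerequisite, here is a minimal sketch of preparing a training file in the chat-format JSONL shape the fine-tuning endpoint expects. The example contents and filename (`train.jsonl`) are made up for illustration; a real job needs at least 10 examples, which you would then upload via the Files API and pass to a fine-tuning job.

```python
import json

def build_finetune_record(system: str, user: str, assistant: str) -> dict:
    """Return one chat-format training example in the JSONL shape
    used by OpenAI fine-tuning: a list of role/content messages."""
    return {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
            {"role": "assistant", "content": assistant},
        ]
    }

# Write a tiny (too small for a real run) training file, one JSON object per line.
records = [
    build_finetune_record("You are a terse assistant.", "Ping?", "Pong."),
    build_finetune_record("You are a terse assistant.", "Status?", "OK."),
]
with open("train.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```

After that, the usual flow is to upload the file with `purpose="fine-tune"` and create a job against a GPT-3.5 model; once you have a few completed jobs, the request-access option for GPT-4 reportedly appears in the UI.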


Pretty sure that if you have fine-tuned three GPT-3.5 models, you will be given the option to apply for GPT-4 fine-tuning access.


I don’t know of a single person who’s gotten access, even with months of fine-tuning usage.

I saw this some time ago and concluded that a large roll-out is unlikely anytime soon. Personally, I requested access twice and never got anywhere with it. But if it’s really not yielding any major improvements yet, then that’s probably for the best.



It seems some people and ‘partners’ do have access now…

Please, OpenAI, release this generally for all of us.



What was your total usage before you even received the option to request GPT-4 fine-tuning?

I think (but don’t know for sure) that it’s based on whether you’ve done fine-tuning before. I fine-tuned approximately 15–20 times before access became available, with datasets of varying size. Some of that was just for experimental purposes, and only a smaller subset of the fine-tuned models ended up being used permanently in production.