I’m using ChatGPT Plus. For the past 3 days, the o1 model no longer appears in the menu. Even in the upgrade plan section, it doesn’t mention “o1” anymore.
However, according to the documentation, Plus users should have access to the o1 model — and I did have access before.
I’ve cleared the cache and tried different browsers, but still no success.
But o1 had a usage limit that refreshed daily. I think that's not the case with the o3-mini models, correct? If so, are the o3-mini models comparable in performance to o1?
I’m not an OpenAI staff member. I’m just a user on this forum like you.
On ChatGPT, I don’t think so. When a model is removed from ChatGPT, a new one usually takes its place. I don’t think they bring old ones back to the ChatGPT interface.
As you’ve seen, these models are already gone or are going soon on ChatGPT:
GPT-3.5
o1
o3-mini
o3-mini-high
GPT-4 (will be removed on April 30, 2025)
But you can still use them through the API.
The API is separate from ChatGPT. You’ll need a different account just for API access. You can use the same email as your ChatGPT account or a different one if you want.
There’s no subscription with API. It’s pay-as-you-go. You only pay for what you use, based on how many tokens the model processes.
One thing to know, though: not all users get access to every model right away. It depends on your API tier. Higher tiers may unlock more models, while lower tiers might be limited to just the basics.
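To make the pay-as-you-go billing above concrete, here is a minimal sketch of how per-token costs add up. The per-token prices used here are placeholder assumptions, not actual OpenAI rates; always check the official pricing page for current numbers.

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  price_in_per_1m: float, price_out_per_1m: float) -> float:
    """Return the cost in dollars for one API call.

    Prices are quoted per 1 million tokens, as on most pricing pages.
    """
    return ((input_tokens / 1_000_000) * price_in_per_1m
            + (output_tokens / 1_000_000) * price_out_per_1m)

# Example: a call with 2,000 input tokens and 500 output tokens at
# hypothetical rates of $15 per 1M input and $60 per 1M output tokens.
cost = estimate_cost(2_000, 500, 15.0, 60.0)
print(f"${cost:.4f}")  # 0.03 + 0.03 = $0.0600
```

So a single mid-sized call costs pennies under these assumed rates; your actual bill is just this sum across all your calls for the month, with no flat subscription fee.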
Model o1 is the best and most accurate I have used so far. I got to test 4.5 this morning, and it wasn't bad at all, but it was gone from the interface by noon. I'm really disappointed with these latest moves from OpenAI. I hope they fix this mess. o3 is just a revamp of 3.5, far inferior to o1 or even 4o.
Yeah, for complicated reasoning (helping me with older, messy legacy projects), o1 was the best and really what kept me using ChatGPT over others. Now that it's gone, o3 and o4-mini-high are really not cutting it: they get many details wrong and require a ton of headache to figure out how to get their outputs working. Really missing o1 right now.
As a ChatGPT Pro ($200/month) user, I agree with those who miss the o1 model.
The accuracy of o3 is significantly lower, and it frequently repeats detailed errors, leading to a noticeable decline in overall quality—this is clearly a downgrade.
I’m currently using ‘o1 pro mode’ as a temporary workaround, but if that disappears too, I’ll be canceling my subscription.
Frankly, o3 feels less like a reasoning model and more like a hallucinating fiction writer.
It repeatedly loses track of context and invents details that were never provided.
Really upset about this. o1 was the model I used the most; I have several projects in progress and am facing big problems without it. The o3 and o4 models are horrible. I had to go back to using 4o, which suffers from crashes and context limits.
I totally agree with all of you. I am a $200 Pro user and only subscribed because of o1. I use it for coding projects. I think it was planned to step back and push Copilot on GitHub.
Right now I feel like I'm losing an employee…
o1 and 4.1 are available in Copilot, but it's not the same as how I used them before in my projects.
Cancelled my ChatGPT subscription out of frustration with generating/correcting/modifying code using the other available, so-called better models in ChatGPT. Eventually I shifted to Gemini, and it seems to work great.
I tried Claude Sonnet 3.5 and 3.7 before Gemini, but it can't output the full code if the code is too long, and there are bugs and crashes while it generates a long piece of code.
I am also thinking of giving up ChatGPT. I've been using o3 daily now, as I was using o1 in the past… and I am forced to cut down my texts for review to 400 words, where in the past I could easily give o1 2,000. And the replies are so much worse, full of mistakes not even children would make!
Give us o1 back! Even if it is an “older model”, it was smarter than anything else, and its reasoning was good, comparable to a human. Not like o3!