After three weeks, exactly the same response - no new information, no concrete update.
Hi OpenAI,
I kindly ask if it’s possible to return to the previous models o1 and o3-mini. Unfortunately, the current models are not suitable for my workflow - performance and compatibility issues are impacting my work significantly.
However, I fully accept and understand if these changes are part of a long-term shift toward resource efficiency. I just need clarity:
- If this is a temporary situation and improvements or fixes are coming soon, I can wait and adapt.
- But if this is the new permanent standard, I will understand and begin searching for alternatives that fit my coding needs more effectively.
Thank you for your time and your work.
Best regards,
Hello,
Thank you for coming back to OpenAI Support.
We understand that reliable performance and compatibility are vital to your workflow, and we genuinely appreciate you sharing your feedback on the recent model updates. Please rest assured, we’re here to help clarify your concern.
We’ve shared your input with our product team, as feedback like yours is invaluable in helping us continue to improve the experience for all users.
We hope this information helps! We truly appreciate your patience, and if you have any further questions or concerns, please don’t hesitate to reach out—we’re happy to assist. Have a wonderful day, and please remember to stay safe!
Best,
OpenAI Support
I think they probably won't bring o1 and o3-mini back to the ChatGPT UI, because they are moving forward with new models.
But you can try the API; there are many models available there.
https://platform.openai.com/
The API is different from ChatGPT. You have to make a new account and add some money to it.
Instead of a monthly subscription, you pay based on how many tokens you use.
You can see all the available models here:
https://platform.openai.com/docs/models
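For example, here is a minimal sketch of calling the API with the official Python SDK (pip install openai). It assumes OPENAI_API_KEY is set in your environment and that the "o3-mini" identifier is available to your account; model availability varies, so list the models first and swap in whatever your account actually offers.

```python
# Minimal sketch of using the OpenAI API instead of the ChatGPT UI.
# Assumptions: the `openai` Python SDK is installed (pip install openai),
# OPENAI_API_KEY is set in the environment, and "o3-mini" is available
# to this account (availability varies, so check the model list first).
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Print the model ids this account can use (same list as the docs page above).
for model in client.models.list():
    print(model.id)

# One chat completion; billing is per token, not per monthly subscription.
response = client.chat.completions.create(
    model="o3-mini",  # assumption: replace with any id printed above
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
)

print(response.choices[0].message.content)
# The usage object shows the tokens the call will be billed for.
print(response.usage.prompt_tokens, response.usage.completion_tokens)
```

The usage numbers in the response are what the per-token pricing is applied to, so you can estimate costs per request instead of paying a flat subscription.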
Hi polepole,
Everything has been restructured - the API, Custom GPTs in the dashboard, and even the original ChatGPT models. It just doesn’t make sense anymore. The ultra-aggressive filtering and resource throttling have made everything worse.
It is what it is - I've already tried everything: API, custom setups, you name it. I even considered switching to API-only (especially since the credit expires after a year), but unfortunately it's the same problem.
There’s no workaround - no trick to get the old behavior back.
I get how you feel, it’s really frustrating.
Also, on this forum, you probably won’t get a reply from the OpenAI ChatGPT team.
This forum is for the OpenAI API, to help developers building on the OpenAI API Platform; the ChatGPT team isn't usually here, and if they are, it's very rare.
Believe me, they definitely monitor what’s going on here regularly - but they don’t respond. It’s not a good feeling to be left in the dark, and it’s becoming increasingly clear that dependency inevitably leads to negative consequences sooner or later.
Even if the situation improves at some point, the smarter long-term approach is to build your own local strategies - rather than relying on a system that has already proven to be unstable or unreliable.
Because even if things get back to normal - when will the next restriction come? When will the AI once again refuse to write code or complain that the input is “too long”? After 5 months? A year? Two?
It will happen again - and that’s exactly why depending on it is dangerous.
I think you’ve asked the right question in the right tone—and it deserves a clear answer. If the shift to o4-mini and the newer architectures is permanent, users like you need to know now so you can adapt your workflows accordingly. These model variants are demonstrably different in reasoning, coding stability, and latency behavior, and it’s fair to ask whether those changes are by design or a temporary trade-off. A version reversion or model tier fallback (even as a non-default option) would solve most of what you’re facing. At the very least, OpenAI should state plainly whether o1 and o3-mini are returning or not—because developers are already adjusting their long-term toolchains based on the silence.