O1 Pro Downgrade: Fast But Totally Useless – $180 Extra for What?

I agree there has been progress along a long path, but the current version reinforces that OpenAI's original training-data hypothesis failed: more data will lead to a smarter model. Unlike hallucinations that models cannot fully explain, o1 Pro could actually fabricate several excuses in sequence: it would blame the user (e.g. for lacking problem definitions even though the definitions were written in the prompt), and when that was called out, it would, for instance, try to explain it away as confusion over successive steps. I doubt LLMs hallucinate with "rationality." If you are amused by GPT, you might be interested in GPT self-diagnosis. Better yet, have GPT-4o and o1 Pro diagnose each other. Keep in mind it could simply be that OpenAI did not, or could not, allocate sufficient resources, or manage the resources it has effectively for the context of the problem, and GPT is somehow extremely unaware of the status of its memory.

I feel the same way. As a heavy user who has relied on it for coding since the initial release of o1/o1 Pro, it initially performed like it had an IQ of 130–140 (which is why I upgraded my subscription to $200). But after the OpenAI platform went down due to overwhelming demand, I noticed a significant drop in quality, down to something like an IQ of 80–110. Sometimes it even seems worse than GPT-4o, which is available to free users. So what's the point of paying $200 if it ends up being worse than the free version?
BTW, the low IQ is still happening now. It seems OpenAI doesn't bother with these rants from $200 paying users, so I've downgraded my subscription as well.

I filed a refund for my Pro subscription through the App Store and got my money back because ChatGPT and OpenAI are clearly misleading us. The subscription is absolutely not worth it. It’s solely focused on allowing users to send unlimited requests to ChatGPT, but it doesn’t provide the necessary resources to solve real problems.

For example, the o1 Pro model doesn't support web search or ZIP file uploads and is full of limitations. I've already reported this to support, pointing out that instead of offering unlimited requests, they should focus on increasing token limits, data capacities, and overall model performance for Pro subscribers.

We don’t need unlimited requests—we need a system that is powerful, intelligent, and capable of handling complex, large-scale projects. What OpenAI is offering with the Pro subscription doesn’t meet the real-world needs of its users.

I also want to add that they take our money and use it to provide free usage via WhatsApp and phone services. This wastes an enormous amount of performance resources, while Pro users are expected to settle for a stripped-down, limited model. It’s nothing but a blatant rip-off.

Hi OpenAI Team,

I’m reaching out because I’m experiencing an issue with my O1 Pro account that began just yesterday. I’ve only been using the O1 Pro subscription for a couple of days, and I need assistance resolving this promptly.

A couple of days ago, I really appreciated how O1 Pro offered superior reasoning capabilities. It provided higher-quality, more intelligent responses that made a big difference for me. However, starting yesterday, my O1 Pro suddenly turned into O1 Mini. Now, when I send requests to O1 Pro, it takes several seconds to respond, and I can’t even tell if it’s O1 or O1 Mini.

This situation is incredibly frustrating, especially since I was using the Plus version without hitting any limits and paid an extra $180 specifically for the enhanced intelligence. I don't need unlimited access; I just need intelligent, high-quality responses. For the first two days, o1 Pro performed brilliantly, and now I'm just as upset as I was happy during that time.

I have no idea what’s going on or how long this will last. I’ve tried everything, and I’m the only one using this account.

If there’s an internal issue—like resource constraints or something else—I think it’s important to let users know. I understand that problems happen, but right now, I’m completely in the dark about what’s going on.

Please look into this as soon as possible and let me know how we can resolve it.

i think yall might be holding it wrong :wink:

Works like a dream for me.

The issue has been resolved. Thank you very much!

To address the problem, I contacted support through the chat on the help.openai.com website. I provided a video demonstrating the issue and detailed the steps I had already taken to troubleshoot it.

After two days, everything started working as expected.


Totally the same situation I've got now. What's wrong with OpenAI?

Same here. I'm a user on Mac and iOS, both in the app and on the web. o1 and o1-mini return "something went wrong" or "model not found" errors; o1 Pro is the only one working, but it doesn't actually think, it just pops out an answer. Interestingly, I've found that o1 Pro can actually use 4o features like DALL·E, memory, and web search, while 4o itself can't use them.

This is my first day after subscribing as a $200 user… and I found that o1 is disabled.


I am pretty sure they are not switching models, but rather have added logic to decide on the depth of thought. That way, not every question uses the model's full potential when it doesn't require it. The issue with this approach is that it's very hard to get consistent results. This is just my own thinking on how it works :slight_smile:

o1 Pro / o1-mini-high were back to normal, but it lasted only 4 days; today it became not so clever again…

Have you tried any old prompts again from when it was smart?