Since switching to o3-pro, I’ve noticed that each request now takes around 15 minutes to complete. By comparison, the same tasks on o1-pro typically finished in 1–5 minutes. Three things concern me:
No clear quality gain – The answers I get from o3-pro feel roughly on par with what o1-pro delivered.
Frequent failures – I occasionally receive incomplete responses or outright errors after the long wait.
Effective price hike – Paying the same subscription fee for much slower turnaround feels like a stealth price increase.
I understand that total latency isn’t just model inference time, but a 5- to 10-fold slowdown is hard to justify in day-to-day development work.
Questions for the community (and any OpenAI staff reading):
Is this extreme latency just a launch-period bottleneck, or should we expect o3-pro to remain this slow?
Are there best-practice prompts or workflow tweaks that noticeably reduce o3-pro turnaround time?
Has anyone measured a real-world quality boost that makes the extra wait worthwhile?
Thanks in advance for any insights or work-arounds. The current delay is a major drag on productivity, and I’d love to know whether it’s likely to improve.
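For anyone who wants numbers instead of impressions when comparing models, here is a minimal timing wrapper. It is plain Python with no OpenAI-specific assumptions; `ask_model` in the comment is a hypothetical stand-in for whatever client call you actually use:

```python
import time

def timed(fn, *args, **kwargs):
    """Call fn(*args, **kwargs) and return (result, elapsed_seconds)."""
    start = time.perf_counter()  # monotonic, high-resolution clock
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    return result, elapsed

# Usage sketch (ask_model is hypothetical, not a real SDK function):
#   answer, secs = timed(ask_model, "o3-pro", prompt)
#   print(f"o3-pro took {secs / 60:.1f} min")
```

Logging a handful of identical prompts against both models this way would make the “5- to 10-fold slowdown” claim concrete, and would also capture the failure rate if you record errors alongside the timings.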
I completely agree – at this point, there’s absolutely no incentive to stick with the Pro model. The slowdown with o3-pro has been a huge drag on productivity, and I haven’t seen a meaningful improvement in quality to justify the wait times. If anything, the frequent errors and timeouts make it feel like a downgrade. I’ll give it one more month to see if performance improves, but unless something changes fast, I’ll be canceling. The experience right now just isn’t worth the cost.
I totally agree. When I asked o3 Pro a question, it took more than 24 minutes to answer. Around the 15-minute mark it looked like the model stopped reasoning, but it eventually came back with a reply.
It’s simply Deep Research that has been relabeled as a deep-thinking model. After all, o3-pro lacks both image processing and canvas functionality, it thinks extremely slowly, and it shows no intermediate steps. It’s clearly Deep Research under the hood, not the advertised o3-pro. That’s pretty outrageous. And this is the company that’s supposed to realize Project Stargate?
It’s completely useless in everyday use and a scam for us Pro users, who went for weeks without a Pro model, paid to get one, and received this instead. You can’t work with it: responses always take more than 10 minutes, usually closer to 20. Even a simple question, like how it’s doing or what the weather is, takes 15 to 20 minutes, and sometimes the answer is wrong or in a different language.
Where’s the accuracy? It takes 17 minutes to think when I ask what my hometown is called. Listen to me—don’t get the O3 Pro, don’t pay for it. OpenAI is completely scamming us. The Research model was simply relabeled and repackaged as a Thinking model, and we’re being ripped off on every level.
What we wanted was an o3 Pro: a real o3 model like the one that already worked great, just faster and smarter. That’s all we wanted. And what did we get? A forever-thinking model that’s completely unusable in daily life and only solves things through Deep Research. As a Pro user, I can tell you that you get exactly the same results from Deep Research as you do from o3 Pro. Ask Deep Research the same things as o3 Pro and you’ll get identical answers. This is total fraud against us customers, I guarantee it.
Thanks for your detailed breakdown. That’s a very interesting theory.
I’ve never used the “Deep Research” feature myself, but what you’re saying makes a lot of sense. Funnily enough, I noticed web search capabilities appearing in the o1-pro model just a few days before o3-pro was officially announced.
To be honest, I’m on the fence about how much web search actually improves the quality of the model’s reasoning. Personally, I’d rather get more insightful and creative answers derived from the model’s own “associative thinking” instead of a copy-paste of what’s already being discussed in communities.
On that note, I’m currently testing out Gemini 2.5 Pro, and I think it’s highly likely I won’t be renewing any of my GPT subscriptions.
Yes, and above all, Gemini 2.5 Pro is still free for a certain number of queries, which is plenty if you only make a few a day. That’s pretty remarkable. And overall, it’s much faster. That’s exactly what really annoys me about o3 Pro: I was looking forward to a faster, smarter model like o3, and what we got is an o1 Pro crutch in a new outfit, once again restricted on all fronts. I’ve now canceled my OpenAI subscriptions and only keep the Plus subscription so I can use Whisper, because I really like OpenAI’s voice-to-text feature.