Finally I had the time to properly test o3-pro… and honestly? In my use case, it is less capable than o1-pro in terms of quality and length of generated code… while o1-pro could reliably one-shot well over 500 lines of high-quality, correct code, I found o3-pro struggling to go past 500 lines, and what it does produce usually comes with mistakes…
Sure, it's 80% cheaper on the API, but it takes way longer AND the quality is lower… I wouldn't complain or ask for o1-pro's return if the code quality were better…
Now the only reason I had the Pro plan on ChatGPT just went away… are there any plans to bring it back?
I too miss o1-pro. I thought this model was perfect. It even gave you a couple of minutes to take a couple of breaths. Now, with o3, too many breaths begin to cause anxiety.
o3-pro is horrible and does not compare to how accurate and almost flawless o1-pro was. I'll be keeping an eye out for which other models can compete, and I'll make the switch.
100%. I've been using o1-Pro for months; it was fantastic: smart and mostly clear.
o3-Pro is completely unusable to me. I can't have any sort of advanced reasoning conversation with it at all: it forgets something I said two messages ago, it can't understand context, and it pretty much can't code properly at all.
I've had to downgrade, unfortunately; it's completely terrible. I know many people who are cancelling because of it.
The main issue for me is that when I send a message to o3-Pro, it takes 15 minutes to reply, the reply is completely wrong, I try to adjust the context, and then I have to wait another 15 minutes just to see the reply is wrong again. I'm not sure who signed off on this, but they should reverse the change, take this model offline, and reinstate o1-Pro.
I have been using o3-pro and my view is quite the opposite. It outshines o1-pro, gives probably the most accurate answers I have ever had from any model (its use of tools makes it amazing), edges well beyond even Gemini 2.5 Pro, and is now my go-to for all of my highest-stakes work.
While I did “miss” o1-pro initially, I am really satisfied with o3-pro. It may be a bit wonky sometimes (just refresh your browser and the output will show), but it really lays out many very precise key aspects that other models are unable to.
Can you give brief context on your usage and your key prompts? Maybe some of us are prompting it wrong. My experience so far is also disappointing: the responses are very inaccurate and the hallucination rate is extremely high. I didn't have these issues with o1-Pro.
Yes! Feeling very vindicated to find this thread. o3-pro is good for the crawl-the-web, put together a detailed report with correct-ish math type of stuff, but terrible for coding so far in my experience. It just outputs stuff that doesn’t work for complex tasks. Has broken a nice chunk of my workflow. o1-pro seemed to “just work”.
For what it’s worth, you can still access it through the API… for now
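For anyone who wants to go that route, here's a minimal sketch using the official openai Python SDK. o1-pro was exposed through the Responses API rather than Chat Completions; the exact model ID and whether it's still listed are assumptions, so check the current model list before relying on it:

```python
# Minimal sketch: calling o1-pro through the OpenAI API.
# Assumes the model is still exposed as "o1-pro" on the Responses API
# and that OPENAI_API_KEY is set in the environment -- verify both,
# since the model could be delisted at any time.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="o1-pro",  # assumption: model ID, subject to removal
    input="Write a Python function that parses RFC 3339 timestamps.",
)

print(response.output_text)
```

Worth noting that API pricing for the pro models is usage-based, so heavy one-shot coding sessions can cost more than the flat subscription did.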