o3-Pro Best-Practices and Best-Uses? Seeking Insights

I am not sure what category this belongs in, because I'm referring to ChatGPT o3-pro prompting as well as API usage and so on, so I put it in Community for now.

I would love to hear people's experiences so far iterating with o3-pro. I have only used it on my ChatGPT account a couple of times, and it iterates so slowly that I haven't found a sweet spot or clear use cases for it yet. If anyone has had the time to play around with o3-pro, please share some of your findings, whether from the API, ChatGPT, or elsewhere.

If you have found any resources or topics discussing this, specifically about o3-pro, feel free to include them.

Thank you,

Nick

1 Like

I am still in the midst of discovering it. I miss o1 Pro, as I used it to review my legal defense and other high-stakes documents. It was very good. Now that it is gone, I have started to use Gemini 2.5 Pro as a replacement.

o3 Pro seems to give more concise but very accurate answers compared to Gemini 2.5 Pro, but Gemini writes in a style quite similar to o1 Pro, which helps my overall workflow.

I don't do coding or math, but I will continue to look for more use cases. o3 is still highly relevant for me for spotting gaps and checking accuracy, as it is very fast now.

1 Like

Great question!

The o-models have specific strengths and weaknesses. For example, o3 takes a long time "thinking": even a simple prompt like "Hi!" can trigger thousands of reasoning tokens, because the model cannot default to early conclusions.

This makes it best suited for complex tasks in areas like science, math, or coding. Simpler prompts often return less useful results, and the hallucination rate per token increases.

I've found this aligns well with the advice to give the model rich context, ideally via tool-assisted input that the model can pull in when needed.

If I can't provide enough complexity or context, it's often better to use a smaller, faster model, and sometimes to generate multiple responses instead.
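For anyone doing this over the API rather than in ChatGPT, here is a minimal sketch of what "rich context plus a tool the model can pull from when needed" could look like with the Python SDK. It is only an illustration of the advice above: the model name, the reasoning-effort setting, and the web-search tool type are my assumptions about what your account exposes, so check the current Responses API docs and swap in whatever you actually have access to.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Assumptions: your account has Responses API access to "o3" (or "o3-pro"),
# and the built-in web search tool is enabled for that model.
response = client.responses.create(
    model="o3",
    reasoning={"effort": "high"},            # let it spend reasoning tokens freely
    tools=[{"type": "web_search_preview"}],  # a tool it can pull fresh context from
    input=(
        "Background: <paste the full project context, constraints, and prior attempts here>\n\n"
        "Task: analyse the trade-offs and recommend one concrete next step, with citations."
    ),
)

print(response.output_text)
```

For prompts that don't carry that much complexity, the same call with a smaller, faster model (perhaps sampling several responses to compare) is usually the cheaper route, which is the trade-off described above.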

4 Likes

I totally agree: I used o1-pro for academic research. It was very good at finding ways to improve my writing and reasoning and at addressing reviewers' comments. Now o3-pro just gives me concise points, which is not helpful for academic writing.

1 Like

This quickly becomes an expensive endeavor, apparently meant to consume all your time while delivering a model that works slower than a human.

"You messed up my carefully planned algorithm; respect my purposeful code and put it back, and deal with this new traceback" as your iterative input means another 10-20 minutes of your labor time consumed waiting for this extremely slow model (a response that may also never come, because the ChatGPT UI will display a full "thinking" bar and never return without a refresh).

You would want to avoid multi-turn interactions, or any task that may not succeed, at all costs. Or use the time to go to another provider and have multiple meaningful interactions that deliver a solution, then eat breakfast, while o3-Pro's thinking bar is still chugging along, never showing any "thinking" text.

1 Like

Good day. I have been using it quite a lot now, and I have to say that o3 Pro does serve as intended. It's technically amazing: not like o1 Pro for its writing abilities, but it is able to point out very strong key points that no other model brings out. So what I do now is use Gemini for my writing, adding the pointers provided by o3 Pro, or simply copy whatever o3 Pro gives into a Word doc and build on it in 4.5 for the writing flow. Once done in Gemini and/or 4.5, I re-upload the draft to the o3 Pro chat for review. It has allowed me to sharpen my work a lot.

Having o3 Pro is like having a mini deep research each time, with actionable steps in its output, delivered concisely rather than long-windedly. The more context you give it, the higher the quality: it is already very good even with limited context, but feed it more and I have always been surprised by what it can give, especially on topics I am very familiar with, where it manages to surprise me even more.

1 Like

I see your point, and I agree with how you are using o3 Pro. I think o1 Pro was more of a combination of o3 Pro and 4.5, so copying whatever pointers o3 Pro provides and then sending them to 4.5 for writing seems to be the way to go. Another thing is the financial cost: having both ChatGPT Pro and Gemini 2.5 Pro at the same time is very expensive.

Yes, you hit the nail on the head.

And with 4.5 leaving on 14 July (?), I am a bit lost on what the replacement is. GPT-5, perhaps?

The challenge I had with o1 pro back then was the lack of tools, which is what makes o3 pro shine.

My personal take: perhaps OpenAI intended o3 Pro to remain their main super-reasoning model to complement the upcoming GPT-5, which unifies 4.5-style writing and, to a certain degree, o4-mini-high-style reasoning and accuracy, with probably other features yet to be released. This would also account for the preliminary assessments from the early trials cited, and for the higher cost compared to 4o.

This is seemingly the gap which I think GPT-5 will hopefully cover in the very near future.

Would make a lot of people happy, I think! :wink:

4.5 has been very helpful with writing.

Thank you both for the responses. I was quite surprised to hear you prefer o1 Pro, because in my experience the lack of web search really handicapped the model, and it would only speak about things up to its training cutoff. For example, a few months ago I asked o1 what it thought about deep research, and it could not accept that it existed. I never saw an application for o1, but I will certainly give it another shot after reading this.

If you don't mind, I think it would be really helpful to hear an example of your full workflow with something like a high-stakes document. I'm a really big fan of prompt chaining, so if you would like to share your prompts and full workflow, I would love to hear it.

For example

How do you prompt o3 Pro to extract these high-yield insights?

Thanks again.

My initial impression of o3 Pro was: hey, this is just like o3, so what's the difference? I liked o1 Pro because I grew accustomed to the way it wrote my papers. It was pretty similar to Gemini 2.5 Pro, but yes, the lack of web search tools crippled it, coupled with the fact that I was unable to upload files and documents.

However, o3 Pro edges ahead tremendously with its tools, providing very deep citations of key aspects of legislation which I hadn't considered and wasn't able to get when I ran the same prompt on Gemini 2.5 Pro (I do peer reviews between the two models).

For my work, I have it run through the papers I have crafted (after I have cleared them section by section), have it tease out gaps, counter-angles, and remnants I may not have seen, and distill everything to the crux of the issue. It produces very accurate outputs (you need to ask it to list the citations with dates and to state the reference date, so it does not just pull outdated references, as legislation does get updated). In addition, it makes you consider call-to-action (CTA) steps, provides its justifications for them, and keeps reminding you to do them even if you "avoid it" in subsequent prompts.

Memory, live web search, basically its tools make it superior to o1 Pro any time. The longer waiting time for better answers is what I need, so I have no issue waiting for it.
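If it helps to see the shape of that review pass, here is a rough, hypothetical sketch of the kind of instruction described above, written as a reusable Python string. The wording is my own paraphrase of the workflow, not the poster's actual prompt, and the section placeholder is something you would fill in yourself before pasting the prompt into the o3 Pro chat alongside the draft.

```python
# Hypothetical review prompt paraphrasing the workflow described above;
# the exact wording and the {section_name} placeholder are illustrative only.
REVIEW_PROMPT = """\
You are reviewing the attached draft (section: {section_name}).

1. Tease out gaps, counter-angles, and loose ends I may have missed.
2. Distill the section to the crux of the issue.
3. List every citation you rely on, with its date, and state the reference
   date you used, so superseded or outdated legislation is not pulled in.
4. Recommend concrete next steps and justify each one.
"""

# Fill in the placeholder, then paste the result into the chat (or send it
# through the API) together with the uploaded draft.
print(REVIEW_PROMPT.format(section_name="Limitation of liability"))
```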

I agree with @vb and just want to add that I've found o3 Pro works best when you explicitly give it as much high-quality context as possible from the outset.

1 Like

I miss o1-pro as well. o3-pro is worse in every aspect: accuracy, speed, quality of reasoning, quality of presentation. It's very frustrating.

3 Likes