I’ve worked on many different projects, and the same core issue appears in every one of them. It’s present across all models, but it’s especially pronounced in O1 Pro, even though O1 Pro initially follows instructions more precisely than the others.
For example, I tried building a meal planner for myself. Given O1 Pro’s raw computational power, I assumed it could handle the calculations for vitamins, minerals, macros, and other factors. Instead, it consistently cuts corners: no matter how detailed the instructions, it finds ways to simplify the math or omit details. Even when I reduced the project’s complexity, the issue persisted to some degree.
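To be concrete about what “precise” means here: the core of the planner is plain deterministic bookkeeping, nothing exotic. Here’s a minimal hypothetical sketch in Python of the kind of calculation involved; the food names and nutrient values are made-up placeholders, not real data, and this isn’t code O1 Pro produced:

```python
# Hypothetical sketch of the exact bookkeeping a meal planner needs.
# All foods and per-100g nutrient values are placeholder examples.
from dataclasses import dataclass

@dataclass
class Food:
    name: str
    grams: float             # serving size in grams
    protein_per_100g: float  # grams of protein per 100 g
    iron_mg_per_100g: float  # mg of iron per 100 g

def total_nutrients(meals: list[Food]) -> dict[str, float]:
    """Sum protein and iron across every item, scaled by serving size."""
    totals = {"protein_g": 0.0, "iron_mg": 0.0}
    for food in meals:
        factor = food.grams / 100.0
        totals["protein_g"] += food.protein_per_100g * factor
        totals["iron_mg"] += food.iron_mg_per_100g * factor
    return totals

plan = [
    Food("oats", 80, 13.0, 4.2),     # placeholder values
    Food("lentils", 150, 9.0, 3.3),  # placeholder values
]
print(total_nutrients(plan))  # {'protein_g': 23.9, 'iron_mg': 8.31}
```

Every number either sums correctly or it doesn’t. When the model “simplifies” a calculation like this, it silently corrupts the result, which is exactly why corner-cutting is disqualifying for this kind of task.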
This doesn’t happen with simple queries, but as soon as the task becomes even slightly complex, O1 Pro starts cutting corners. I understand this is an intentional trade-off to conserve computing power, but it renders O1 Pro ineffective for precision tasks.
I’ve been working with GPT since its launch, and I’m 100% certain of this: unless the corner-cutting behavior meant to optimize resources is either removed from certain models, or adapted to detect precision tasks and switch to a more accurate mode, GPT’s real-world usefulness will remain limited.