There’s been a lot of discussion about GPT-4.5’s API performance, but one thing I’m wondering is whether its reasoning capabilities behave differently depending on how it’s accessed.
Some users have noted improvements in structured API responses, while others have reported tighter limits on deep recursion, multi-step logic, and iterative refinement compared to earlier models.
Does OpenAI apply different constraints to API responses vs. ChatGPT's web interface? If so, what's actually being optimized, and what's being restricted? Would love to hear from those who've tested both.
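For anyone wanting to run that comparison, a minimal sketch of the idea: pin down every sampling parameter the API exposes so differences vs. the web UI can't be blamed on hidden defaults. Model name, prompt, and parameter values here are illustrative placeholders, not anything confirmed about GPT-4.5.

```python
def build_request(prompt: str, model: str = "gpt-4.5-preview") -> dict:
    """Assemble an API request with every sampling knob fixed, so any
    behavioral difference vs. the web UI isn't down to default settings."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,   # minimize sampling variance between runs
        "top_p": 1,
        "max_tokens": 1024,
        "seed": 42,         # best-effort reproducibility across calls
    }

request = build_request("Refine this argument step by step: ...")
# Send with the official openai SDK, e.g.
#   client.chat.completions.create(**request)
# then paste the same prompt into the web UI and diff the outputs.
print(sorted(request.keys()))
```

Repeating the same prompt several times on each side (and diffing) separates sampling noise from any genuine interface-level difference.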