Recent update reduced o3-mini quality

Hi OpenAI Team,

I really love your new reasoning models and have been using them extensively for my work and school. But recently, I noticed that o3-mini has become ‘lazier’ and no longer tries to complete given tasks properly and fully, even for simple math and coding tasks.

For example, I’ve been using o3‑mini to walk me through my math lecture notes in detail. Until recently it did an amazing job, but now it only provides formulas with minimal explanation and no intuition, which is exactly what I use it for. Oddly, it also tends to present a lot of tables, even though that’s not what I want for learning math.

This is only one of the use cases that has been affected. There are also simple coding tasks that even o1-mini could handle easily, yet o3-mini now refuses to do them properly.

Have you recently updated the model? I’m sharing this as feedback, and also to see if others have noticed the same behavior. I’ve really loved o3-mini so far and hope it keeps being as awesome as before, or even better! Thanks in advance for your help.