When the reasoning model o1 was released in December, I was over the moon: it was extremely obedient, highly original in its writing, used natural language, and so on. In short, I knew exactly what I was paying $20 for.
But after the latest update and the release of o3-mini (which, by the way, produces worse text than o1), o1 has turned into a complete disaster.
It often thinks for only 3-4 seconds on a request that it previously would have spent 30-40 seconds processing, and then gives a response so bad and of such low quality that even 4o would have done better.
I don’t understand why the company isn’t commenting on this in any way.
It’s just awful to pay $20 and get this. Please, OpenAI, just give me those 50 generations per week. Fine, I’ll live with the fact that it’s only 50, but at least make them high quality!!! (Because 50 generations per week is very little!)