Performance o1 vs o1-preview

Hello,

Just a quick observation: in my opinion, o1 seems to be a regression from o1-preview. o1-preview could produce better analyses and understand text and the context of data without help, and it was better at making decisions and solving less concrete tasks. So overall: less autonomy, less intelligence.

Maybe that is something to optimize 😉


I noticed this as well. The o1 model seems a little lazier, and it doesn’t seem as smart as it should be.

In some tasks I’ve given it, 4o seems to perform better than o1. o1-preview was the best model, and this one feels like a less powerful version… maybe OpenAI moved o1-preview to o1 pro and is giving us a weaker model 🙁


o1 is much worse than o1-preview. If it’s not a bug and they made o1 dumber on purpose, I’d probably cancel my subscription.


Hello,

I’ve noticed the same, unfortunately. The reasoning process seems noticeably shorter, and the steps in problem-solving are more fragmented. This often leads to much weaker responses overall.

I really hope this is just a temporary phase, perhaps due to the increased demand, and that improvements are on the way!

I’d also love to see more user control over the reasoning process, perhaps options to enforce longer, more deliberate thinking in ChatGPT. I often prioritize quality over speed; I wouldn’t mind waiting a few minutes if it meant getting well-thought-out, high-quality responses. Focusing too much on speed feels like the wrong direction for OpenAI, at least from my perspective.


That’s what I have been noticing too: differences when I use o1-preview from POE dot com and compare it with o1 from ChatGPT dot com. Out of curiosity and surprise, I was wondering, “Is it just me, or is it anyone else?” And I found this thread. Thank you for sharing your observations!
