Just some information: in my opinion, o1 seems to be a regression compared to o1-preview, which could produce better analyses and understand text and the context of data without help. o1-preview was better at making decisions and solving less concrete tasks. So overall: less autonomy, less intelligence.
I noticed this as well. The o1 model seems a bit lazier, and it doesn't seem as smart as it should be.
In some tasks I've given it, 4o seems to perform better than o1. o1-preview was the best model, and this one seems like they're running a less powerful version… maybe OpenAI moved o1-preview to o1 pro and is giving us a less powerful model.
Iāve noticed the same, unfortunately. The reasoning process seems noticeably shorter, and the steps in problem-solving are more fragmented. This often leads to much weaker responses overall.
I really hope this is just a temporary phase, perhaps due to the increased demand, and that improvements are on the way!
I'd also love to see more user control over the reasoning process, perhaps options to enforce longer, more deliberate thinking in ChatGPT. For me, I often prioritize quality over speed; I wouldn't mind waiting a few minutes if it meant getting well-thought-out, high-quality responses. Focusing too much on speed feels like the wrong direction for OpenAI, at least from my perspective.
That's exactly the difference I've been noticing when I compare o1-preview on POE dot com with o1 on ChatGPT dot com. Out of curiosity and surprise, I wondered, "Is it just me, or is anyone else seeing this?" And I found this thread. Thank you for sharing your observations!