i tried to use o1-preview today and it didn't go through the reasoning steps at all, just responded exactly the way gpt-4o would have. i got notified that i have 25 prompts left on o1-preview, but none of my recent prompts have actually been using o1-preview.
how i know this is happening:
it responded immediately rather than taking any time to reason first
it did not list the broad steps it took in the drop-down as it usually does
it responded using the customizations and memories it had (which o1-preview hasn’t done before)
its responses are terse and borderline useless on questions it would normally have considered from multiple angles before responding.
oh also - i can generate alternate responses using gpt-4o and compare them, and they're nearly identical even though one alternate is labeled o1 and the other 4o
same here. the answers from o1-preview, o1-mini, 4o, and 4o-mini are all the same on the website, but they're different in the android app. i think it's a browser-related issue, though, since the problem goes away when i use another pc.
looks like the phone app consistently gets o1 to respond, but the website is what's having the issues. which seems really strange… maybe the model specified in the request is wrong on the web and correct in the app?
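if anyone wants to rule out the model itself, here's a rough way i'd sanity-check from the API side. this is a separate backend from the chat website, so it only shows that the model field in the request is what decides which model answers - and it assumes you have an api key with o1-preview access (the prompt below is just a made-up example):

```python
# minimal sketch: send the same prompt with the model parameter set explicitly
# and check which model the response reports back (assumes API access to o1-preview)
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Walk through the tradeoffs of caching at the CDN vs. the application layer."

for model in ("o1-preview", "gpt-4o"):
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    # resp.model echoes the model that actually served the request,
    # so a mismatch here would point at routing rather than the model itself
    print(model, "->", resp.model)
    print(resp.choices[0].message.content[:200], "\n")
```

if both calls come back reporting the right model and the answers look clearly different, that would suggest the website is sending the wrong model in the request rather than anything being wrong with o1-preview itself.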