o1 and o1-mini Reduced Intelligence - Post 12/11/24 Crash

Ever since yesterday’s crash, o1 and o1-mini have lost a significant amount of intelligence and organization in complex responses. I hope this is not a long-term problem, as the intelligence level is SIGNIFICANTLY lower.


They not only downgraded the $20 model; they even downgraded my $200 o1 pro. You are not alone.


Desperate Plea: Please Don’t Downgrade My o1 pro Model – I Need It for My Research

Dear OpenAI Team,

I am a student, and I am currently working on a critical research project that requires a high level of intelligence and accuracy—exactly what o1 pro was supposed to provide. I’ve invested my entire budget, every bit of savings, to pay for the $200 o1 pro plan, and I’m at my breaking point.

Please, I am begging you—don’t downgrade my model. I’ve already posted here in an attempt to get some attention, but beyond that, I feel completely powerless. I can’t afford to lose the tool that is key to my work. Without it, I’m stuck, and my research will suffer terribly.

I understand that OpenAI is working on various things, but I don’t know what else to do. Please don’t remove my post or silence my concerns. I am desperate. I just need the model to remain as it was when I signed up—intelligent, reliable, and able to support the complex tasks I’m facing.

I know this might seem like a small issue in the grand scheme of things, but for me, it’s everything right now. I humbly ask that you don’t ignore this. I’m pleading for your understanding.

Thank you for taking the time to read this. I can only hope for a solution that doesn’t impact my academic future.

Sincerely,

A student from National Taiwan University


I understand your position and respect your sentiment.
But let me point out that one part of scientific work, and of work in general, is managing risk.
I think it makes sense to work with the latest AI in science, even given the risk to availability.
The word "desperation" caught my attention. A change in the business operations of a private company in a different country may be able to kill your project, but the project should have a strategy to degrade, or die, gracefully.

Ok, I realize this is not too helpful…
What helps me in this situation is that I suffered through seeing a model’s intelligence degrade so long ago that I now accept it as a fact of life…
Watching GPT-3 develop dementia was sad…

Impact your academic future? This tool literally just came out and it’s this crucial to your success? It doesn’t sound like you should be given any credentials to be doing anything quite yet.

Yup, had to go to their competitors. ChatGPT is like the worst one now.

Quote: “…intelligent, reliable, and able to support the complex tasks…”

Sorry, but I have to bring you back to reality. As far as I know, no AI is truly “intelligent” or capable of handling complex tasks. In fact, I see a lot of boasting from OpenAI about all the supposed technology they integrate, but let me tell you, their machine often struggles with even simple organizing tasks. It frequently confuses things, misses crucial parts, or even invents details, leaving you thinking, ‘What the hell?!’ So when it comes down to it, ChatGPT is 0% reliable, which essentially makes it a burden, since you still have to double-check your work, wasting time.

You would think that these kinds of things would improve whenever the models improve, but here comes the joke. Surprisingly, Scrapyard 3.5 was very good at processing jobs and organizing, for example, up to about 50 lines before it became ‘drunk.’ Then came Vacuum Cleaner 4, which could do about the same but charged you more. We didn’t pay much attention to this, because now we had uploads, images, and all that stuff. Things went WAY down when GarbageCan 4o came out, including Mini Garbage. Suddenly, outputs became much smaller, and processing jobs started becoming ‘drunk’ after just 25 lines. But people are easily blinded when you double their cap.

Now, the latest piece of junk from OpenAI, Charlatan o1, is supposedly ‘reasoning,’ and just because the hocus-pocus takes a couple of seconds, you might think it is actually ‘intelligent.’ Actually, it is intelligent, but not in the way you think: it’s not good at all at the simplest daily jobs (like the other scrap parts), but it’s definitely better than ever at restricting you.

Knowing all this, would I advise you not to use the junk from OpenAI? No; on the contrary, it can be pretty useful if you know how to craft good prompts and don’t end up feeding it too much. I could spend an extra 20 dollars for those beggars in Silicon Valley, hoping that one day they will improve their hobby project without useless restrictions. Maybe they will also interact better with their users, since that is what a community is all about. Would I pay 200 dollars for this malfunctioning machine with loose screws? Not in a million lifetimes.

In my opinion, AI bots can be handy as assisting tools, but it’s better to rely on your own brain for intelligence when it’s available.

Cheers and Merry X-Mas to everyone.

I hate to do it, but I agree with this.

I felt a significant drop in o1 pro’s competence over the last, maybe, 2 days?

I have no eval to back it up, but if there were any form of model update, I’d look into rolling it back.

It could also be that I got used to o1 pro and decided to get more ambitious with it, causing some sort of cognitive bias.

I even went back to using 4o instead of o1 pro, since 4o took less time and seemed more accurate.

Maybe it has to do with this: o1 pro is high-latency, and maybe it was switched to medium latency because of how much the requests were costing? I have no idea what changed, or if it’s all in our heads.

I’ll share this: a PhD candidate came to her professor asking for help. The professor was very busy, and so was I, but since he knows I’m really good with technology, he asked me to do it. I told him how busy I was and asked if I could use AI, and he gave me permission to do so.

In 15 minutes, I photographed the 20+ pages given to me, used a vision model to digitize every page and chart, and used o1-preview to review the paper.

Once the critique and review were done, I gave it to the professor. He was very surprised: the review was spot on. It would have taken him a few hours to do, and his review and critiques would have been better, but not by much.

A few weeks later he was able to review the paper. The day that he finished his review he asked me to help him get a plus account for ChatGPT.

Moral of the story: sure, we are not there yet, but knowing how to use the newer models can certainly help quite a lot. I imagine o3, o4, and o5 will get even closer to real-world applications in research and the postgraduate world.
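For anyone curious, the digitize-then-review pipeline above can be sketched roughly like this. To be clear, this is my own reconstruction, not the original poster’s code: the model names (`gpt-4o` for vision, `o1-preview` for review), the `pages/` folder, and the prompts are all assumptions you would adapt to your own setup.

```python
# Hypothetical sketch: transcribe photographed pages with a vision model,
# then hand the combined text to a reasoning model for a critique.
# Model names, prompts, and the "pages/" folder are assumptions.
import base64
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"


def transcribe_request(jpeg_bytes: bytes, model: str = "gpt-4o") -> dict:
    """Build a Chat Completions payload asking a vision model to transcribe one page."""
    b64 = base64.b64encode(jpeg_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Transcribe this page, including any charts, as plain text."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    }


def review_request(paper_text: str, model: str = "o1-preview") -> dict:
    """Build a payload asking a reasoning model to critique the digitized paper."""
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": "Review this draft paper and list concrete critiques:\n\n" + paper_text,
        }],
    }


def call(payload: dict) -> str:
    """POST a payload to the API and return the first choice's message text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY") and os.path.isdir("pages"):
    page_texts = []
    for name in sorted(os.listdir("pages")):
        if name.endswith(".jpg"):
            with open(os.path.join("pages", name), "rb") as f:
                page_texts.append(call(transcribe_request(f.read())))
    print(call(review_request("\n\n".join(page_texts))))
```

Nothing fancy: one vision call per page so a single bad photo doesn’t poison the whole transcription, then one reasoning call over the stitched text.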

The thing is that you paid 200 dollars a month for a Pro account, for which you should get the quality you expected. And if that is not possible for some reason, OpenAI should at least give you a clear explanation of what went wrong.

Look, I don’t mind that things are not working perfectly, since in the end it is technology, which is ever improving.
What bothers me is that OpenAI forgets that WE, the users, are supporting their project and keeping them alive. So the least they could do, out of respect for their users, is provide some honest communication about it.

There is no excuse for not doing so, in my opinion.
Good luck with your project. I hope it will work out for you.

It’s back to working just fine now, at least for me; no idea why it got degraded… but status.openai.com did say something was going on:

And it does match the timing I mentioned… it only said voice mode, but I got the degradation on the text models (or at least the perception of it, as maybe it is just a case of working in different areas of study).