The Great ChatGPT o1 pro Downgrade Nobody's Talking About

Are you implying that I’m asking for a discount? No, that’s not the case. I won’t be paying for the Pro version anymore due to my bad experience. I was simply analyzing the industry’s context.

In any case, it seems there’s no need to ask for a discount anymore :laughing:


I'm sure they must be limiting Pro accounts that use too much. It's gotten significantly worse in the last few days (I've had it for about two weeks). In long conversations it just refuses to respond, and currently it's having major issues remembering messages from even two messages prior. Very concerning.

Update: honestly, it's worse than when GPT was free. It's like trying to hold water in a leaky bucket. So bad.


The same exact thing is happening with o3 now. It constantly errors out on anything even moderately complex or long, without explanation.

Oh my god, I tried o3 and it's honestly awful, like actual slop. Completely unusable. How could it be so bad? I'm mind-blown.

Unfortunately this is correct: o3 is horrendous, and o3-pro is actually just as bad xD

It's obvious by now that they're dumbing down the models while marketing them to users as "smarter." If they sold almost anything labeled AGI, people would still get excited, pay lots of money, and tell their friends about the Singularity… anyway.


The biggest challenge I see with o1 Pro Mode (now o3 Pro Mode) is how long reasoning takes. In some cases it takes 20 minutes of reasoning to produce a response. The responses are high quality, similar to how they have been since I signed up when Pro first became available; the real cost is the time spent waiting. Also, when threads get fairly large they get even slower, and I have to start a new thread and re-prime it with the pertinent info.

I'm still 100% getting my money's worth. However, I've wondered whether I would be just as successful with the $20 version. I will say I've tried both on Python scripts, and I feel Pro is much better; the trade-off is waiting for the model to reason and deliver the response.

Honestly, I’d pay 5x the price of Pro Mode ($1,000/month) if the response times were similar to other models.

I can't even use o3-pro for normal coding operations anymore. It's not that it takes forever; it's that it repeatedly says one thing and does another, over and over. For example, it gives random or truncated versions of a file, then I explicitly tell it, and show it, that what it gave me is something like 100 lines less code (when we're adding features), and it insists it gave me a version that is, say, 150 lines larger (it's done this about 5 times in a row now). Something is severely wrong with o3-pro, and it blows my mind that OpenAI thinks it's acceptable to rug-pull all the o1 coders. Explanation, please?
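For what it's worth, this is roughly how I check the truncation before showing it to the model. A minimal Python sketch; the filenames are placeholders for whatever original and returned files you're actually comparing:

```python
# Compare line counts (and a rough diff) between the file you sent
# and the version the model returned, to make the truncation visible.
# Filenames below are hypothetical placeholders.
import difflib
from pathlib import Path

original = Path("app_original.py").read_text().splitlines()
returned = Path("app_from_o3pro.py").read_text().splitlines()

print(f"original: {len(original)} lines, "
      f"returned: {len(returned)} lines, "
      f"delta: {len(returned) - len(original)}")

# Unified diff makes the dropped sections obvious at a glance.
for line in difflib.unified_diff(original, returned,
                                 fromfile="original",
                                 tofile="returned",
                                 lineterm=""):
    print(line)
```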


I've been using ChatGPT for years. o3 pro is a rip-off. Regular 4.0 with the regular subscription is better most of the time. I ask it to give me a list of 20 things: sometimes it gives me 1, other times only some, and other times it writes something completely off or makes a whole new list. /quit. They won't give me my money back either. o3 pro is not even 1% of o1 pro.

I don't code, but I know exactly what you mean. It happens to me so much that I'm looking for something that actually works.

Yep. It did seem to get slightly better, but it's still horrible compared to o1/o1 pro. Let me know if you find something better. I hear Grok 4 is going to be released.