Weird, extreme cap for GPT-4o as a Plus member

Hey Guys

I wanted to ask whether any other ChatGPT Plus members have also noticed a decline in the number of chats you can have with GPT-4o.

I used to be able to chat with 4o essentially without limits.

For those who don’t know, Plus is the paid membership, and ever since the new Pro membership came out I have noticed a massive decline in the number of chats I can have with GPT-4o.

I get the limitation on models like o1, but 4o? Seriously?

I find that kind of disrespectful, since I am paying quite a lot every month.

Has anyone else noticed this happening?


One day o3 was good, the next day it was bad, and tonight it was good again.

I think it has to do with my prompts.

On the first night I explained the details slowly, going forward step by step until it reached a good point.

The next day I gave it a huge load of stuff because the result from the day before was so impressive, and I even fed it background story details for the task.

Last night, prepared with fresh knowledge, I started asking for smaller portions and explained things in more detail. I finished four times what I managed in the first programming session…

gpt-4o works a lot better this way too.

To be honest, I’ve noticed that, for the past few days, I’ve suddenly started reaching the message limit for 4o. Usually, I end up having to wait anywhere from 20 minutes to 2 hours, and it seems completely random.

Some days I can chat indefinitely, while on others I hit the limit after around 20 messages and have to either switch to another model or wait out the time they specify.

Yeah, same for me.

I never hit the cap of 4o in the past, but in these past few days, I’ve gotten the cap quite often, which is annoying.

I think it has something to do with OpenAI releasing the “Pro” membership, which has an absurd price tag.

Same! I’ve never encountered limits on 4o. That led me to search about it, and I’m glad I found your post!

I only subscribed to the paid membership a few months ago, and I’ve never experienced limits (except with o1 or image generator models). I’ve never questioned what caps there may be per hour or day, especially for 4o.

I’m finding that 4o is failing so often now with terrible suggestions and “forgetting” that I’m starting to waste time using it and waste prompts more than ever. Is anyone else noticing that?

If the decline in quality for the 4o model requires us to use more prompts than before, is there any way to push back on OpenAI for that?

I would hate to think their inefficiency/quality loss is intended to motivate us to upgrade to a new, higher-tiered version. OpenAI should know that won’t work for their subscribers.

I avoid 4o like the plague that it has become.

Hey everyone!

Make sure to open a new chat for each new request.
This will get you rate-limited less often.
For each message you post in the same chat, all previous messages get prepended before the new message so the AI has the context of the conversation.
This context adds to the context length.
The longer the context length, the more input tokens you use.
So if you send 100 small messages in one chat, they add up “behind the scenes” without you knowing: you end up with a “small message” that in reality carries hundreds of messages of history, eating up input tokens you might not need and rate-limiting you faster.
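To make that concrete, here is a rough back-of-the-envelope sketch of the token growth described above. This is not OpenAI’s actual tokenizer or rate-limit accounting; the word-count “tokenizer” and the message size are made-up assumptions, purely for illustration:

```python
# Illustrative only: compares input tokens consumed when every message is sent
# in one long chat (full history resent each turn) versus in a fresh chat each
# time. The tokenizer below is a crude stand-in, not a real one.

def tokens(text: str) -> int:
    # Rough approximation: ~1 token per word (assumption, not real tokenization).
    return len(text.split())

# 100 similar small requests (hypothetical example message).
messages = ["please refactor this function to use a dict"] * 100

# One long chat: the full history is resent with every new message.
history = []
long_chat_input_tokens = 0
for msg in messages:
    history.append(msg)
    long_chat_input_tokens += sum(tokens(m) for m in history)

# Fresh chat per request: only the new message is sent each time.
fresh_chat_input_tokens = sum(tokens(m) for m in messages)

print(f"one long chat:      {long_chat_input_tokens} input tokens")
print(f"new chat each time: {fresh_chat_input_tokens} input tokens")
```

With these made-up numbers, the single long chat ends up consuming roughly 50× the input tokens of starting a new chat per request, which is the “behind the scenes” growth the post is describing.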

TLDR:

Make a new chat for each thing you ask; it will rate-limit you less.

Cheers! :hugs:

You’re not mixing up ChatGPT and the API, are you?