Are caps going up or down next week?

GPT-4 currently has a cap of 25 messages every 3 hours. Expect lower cap next week, as we adjust for demand.

GPT-4 seems to think we are going to get even stricter caps next week; my gut says this is a misphrasing of sorts.


While I’d love for higher caps, I suspect when they say “lower” they mean “lower”, especially considering that we had 100 every 4 hours yesterday, and are down to 25 every 3 now. It’s very possible that demand via the GPT-4 API is eating up compute on the server side, so they’re throttling the ChatGPT interface to make room for the API users. Just my conjecture, though. I’d welcome an official answer :slight_smile:


I find the cap of 25 messages per 3 hours to be too low for a paid service. As a developer, I need to use it often, and 25 messages just don’t cut it.

In my opinion, anything lower than 40 messages every 3 hours is not acceptable for a paid service like this. It’s frustrating.

Another issue I’ve noticed is that the system doesn’t display how many requests you’ve made in the last X hours. This is a malicious practice and makes it difficult for users to manage their usage effectively.

I hope that we can all voice our concerns about these issues and push for a reasonable change in the cap limit and transparency in usage monitoring.
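Since the interface doesn’t show a counter, a rough workaround is to keep one yourself. Here’s a minimal sketch of a client-side rolling-window tracker (the `UsageTracker` name is made up, and the 25-per-3-hours defaults just mirror the cap mentioned above; none of this is an official API):

```python
from collections import deque
import time

class UsageTracker:
    """Client-side tally of messages sent within a rolling time window."""

    def __init__(self, cap=25, window_seconds=3 * 60 * 60):
        self.cap = cap
        self.window = window_seconds
        self.timestamps = deque()  # send times, oldest first

    def record(self, now=None):
        """Log one message send; return the remaining quota."""
        now = time.time() if now is None else now
        self.timestamps.append(now)
        return self.remaining(now)

    def remaining(self, now=None):
        """How many messages are left in the current window."""
        now = time.time() if now is None else now
        # Drop sends that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return max(0, self.cap - len(self.timestamps))
```

Call `tracker.record()` each time you send a message and it tells you roughly how many you have left, assuming the cap really is a rolling 3-hour window (OpenAI hasn’t documented the exact mechanics, so treat it as an estimate).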



The difference in quality between 3.5 and 4 is clear, which makes this change very disappointing. I get it; this is a sneak preview, but returning to 3.5 feels bad after you’ve used 4. It’s on par with using a car to travel 50 miles one day and then having to go back to riding a bike for that same 50 miles. Sure, they didn’t promise you the car, but it’s hard to get excited about riding a bike again. 3.5 is better than walking, but that doesn’t make this feel good.


I don’t know if it’s just me, but I find GPT-4 dumber on more complicated problems, or even simple but less common ones such as PCR primer design.

Text is now:

GPT-4 currently has a cap of 25 messages every 3 hours. Expect significantly lower caps, as we adjust for demand.

(new word is “significantly”)

I am really not sure what to expect here… I was already bracing for something like 10 messages every 3 hours, but now that it’s “significantly” lower, I suppose I should expect a cap of more like 5 messages every 3 hours.

This is now SIGNIFICANTLY annoying me.


This is just a JOKE. For a paid service it is extremely restricted, especially when you often have to ask the question several times in different ways for it to start getting towards the right answer. They should exclude repetitive questions from the count!

I get people are upset, but I also understand where OpenAI are in this mess.

Yes, it is a paid service, but the amount of compute needed to serve “unlimited” GPT-4 calls is nowhere near the $20 USD a month we are paying.

But then the entire business model is flawed… they either have to work more on the engine to make it less compute-hungry, or provide tiers of service and pricing. I am happy to pay more for more… until they get it to a point where it’s unlimited, if that would even happen in the next few months/years.

I think they are way beyond the point of getting people to buy in to it; they have a demand problem, so they need to restructure the offering…

Looks like they’ve reduced the maximum output for gpt-4 for me. It used to be able to output a lot; now it seems to be limited to 255 words. This means we’re being forced to burn through our cap faster, which is disappointing. 255 words is very low, not even enough to give a proper answer. This is about half of what we previously had, which means we’re effectively further capped to about 12.5 queries per 3 hours, since you have to say “continue” for the AI to finish the answer. Anyone else experiencing this?

Edit: Seems like they increased it again? Or they improved something, because it’s been working well today, and it’s faster. Hopefully it stays that way and they don’t reduce the limit again. I’m all for paying a bit more for gpt-4, but only if the product works as intended. And today it’s been very good.


I noticed this yesterday as well; I was asking it to fiddle with a SQL query and it just gave a truncated result.