Are caps going up or down next week?

GPT-4 currently has a cap of 25 messages every 3 hours. Expect lower cap next week, as we adjust for demand.

GPT-4 seems to think we are going to get even stricter caps next week; my gut says this is a misphrasing of sorts.

5 Likes

While I’d love for higher caps, I suspect when they say “lower” they mean “lower”, especially considering that we had 100 every 4 hours yesterday, and are down to 25 every 3 now. It’s very possible that demand via the GPT-4 API is eating up compute on the server side, so they’re throttling the ChatGPT interface to make room for the API users. Just my conjecture, though. I’d welcome an official answer :slight_smile:

3 Likes

I find the cap of 25 messages per 3 hours to be too low for a paid service. As a developer, I need to use it often, and 25 messages just don’t cut it.

In my opinion, anything lower than 40 messages every 3 hours is not acceptable for a paid service like this. It’s frustrating.

Another issue I’ve noticed is that the system doesn’t display how many requests you’ve made in the last X hours. This is a malicious practice and makes it difficult for users to manage their usage effectively.

I hope that we can all voice our concerns about these issues and push for a reasonable change in the cap limit and transparency in usage monitoring.

Thanks

10 Likes

The difference in quality between 3.5 and 4 is clear, which makes this change very disappointing. I get it; this is a sneak preview, but returning to 3.5 feels bad after you’ve used 4. It’s on par with using a car to travel 50 miles one day and then having to go back to riding a bike for that same 50 miles. Sure, they didn’t promise you the car, but it’s hard to get excited about riding a bike again. 3.5 is better than walking, but that doesn’t make this feel good.

4 Likes

I don’t know if it is me, but I find GPT-4 dumber on more complicated problems or even simple but less common ones such as PCR primer design.

Text is now:

GPT-4 currently has a cap of 25 messages every 3 hours. Expect significantly lower caps, as we adjust for demand.

(new word is “significantly”)

I am really not sure what to expect here… I am guessing that I should have expected 10 messages every 3 hours, but now that it is significant I should only expect a cap of 5 messages every 3 hours.

This is now SIGNIFICANTLY annoying me.

1 Like

This is just a JOKE. For a paid service it is extremely restricted, especially when you often have to ask the question several times in different ways for it to start getting towards the right answer. They should exclude repetitive questions from the count!

2 Likes

I get people are upset, but I also understand where OpenAI are in this mess.

Yes, it is a paid service, but the amount of compute needed to serve “unlimited” GPT-4 calls is nowhere near the $20 USD a month we are paying.

1 Like

But then the entire business model is flawed… they either have to work more on the engine to make it less thirsty for compute, or provide tiers of service and charges… I am happy to pay more for more… until they get it to a point where it’s unlimited, if that would even happen in the next few months/years.

I think they are way beyond the point of getting people to buy in to it; they have a demand problem, so they need to restructure the offering…

Looks like they’ve reduced the maximum output for gpt-4 for me. It used to be able to output a lot, now it seems to be limited to 255 words. This means we’re being forced to use our cap faster, which is disappointing. 255 words is very low. It’s not enough even to give a proper answer. This is about 2x less than we previously had, which means that essentially we’re further capped to about 12.5 queries per 3 hours, since you have to say “continue” for the AI to finish the answer. Anyone else experiencing this?

Edit: Seems like they increased it again? Or they improved something, because it’s been working well today, and it’s faster. Hopefully it stays that way and they don’t cut the output limit back down again. I’m all for paying a bit more for gpt-4, but only if the product works as intended. And today it’s been very good.

2 Likes

I noticed this yesterday as well, was asking it to fiddle with a SQL query and it just had a truncated result.

The question I have is: if they keep lowering the caps to make room for more users, are they proportionately lowering the monthly fees we’ve paid? Since we are proportionately restricted to less usage so they can sell more subscriptions? That’s like buying Microsoft 365 and then Microsoft saying: yes, I know you’ve paid for a subscription to Word, but now you can only use it this much, on these days, for this long, because we sold too many products and now we can’t keep up with the supply chain. Then don’t sell the product. Are we getting refunds? You can’t set the rules of the game and then change them after the game has started. There are laws about this when you sell a product or service.

1 Like

Is GPT-4 chat on the Playground restricted to 25 per 3 hours? Since it’s pay-per-token, I bet it’s not restricted.

So for some people, maybe playground chats are better than the $20 per month.
Well, I am happy so far. What is twenty bucks in exchange for what we’re getting…
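For anyone weighing the Playground’s pay-per-token pricing against the $20 subscription, here’s a back-of-the-envelope sketch. The per-token rates below are the gpt-4 8K-context API prices as I understand them right now; treat them as assumptions and check the current pricing page, since they change:

```python
# Rough break-even check: Playground pay-per-token vs. the $20/month plan.
# ASSUMED rates: $0.03 per 1K prompt tokens, $0.06 per 1K completion tokens
# (gpt-4 8K context at time of writing -- verify against the pricing page).

PROMPT_RATE = 0.03 / 1000      # USD per prompt token
COMPLETION_RATE = 0.06 / 1000  # USD per completion token

def monthly_api_cost(calls_per_day, prompt_tokens=500, completion_tokens=500, days=30):
    """Estimated monthly cost of making the same calls through the API."""
    per_call = prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE
    return calls_per_day * per_call * days

# e.g. 15 mid-sized calls a day:
print(round(monthly_api_cost(15), 2))  # -> 20.25
```

At these assumed rates, roughly 15 mid-sized calls a day already lands right around $20/month, so heavier users still come out ahead on the flat fee; light users might be better off on the Playground.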

These guys aren’t great at monetisation. If the USD 20 only pays for compute for 25 GPT-4 calls per 3 hours, don’t just cut us off - let us power up for an extra payment. When you’re on a roll, you’re on a roll. GPT-3.5 can’t cut it to finish off these tasks, and having no option at all to continue is annoying AF. If there’s a fee at that point, make it clear why, and people would pay it if they are getting sufficient value.

Since I just hit my limit of GPT-4 calls, I asked GPT-3.5 for some suggestions for how OpenAI could let me continue to access GPT-4 without costing them money; it seems they could just ask their own tool and they’d have the answer:

"One way to achieve this could be by offering a subscription-based model that includes a fixed number of calls per month for a set price of $20, with additional calls charged at a higher rate. Here are some ways to implement this:

  1. Tiered pricing: The LLM provider could offer different tiers of subscription plans, each with a different number of included calls and different rates for additional calls. For example, a basic plan could offer 10 calls for $20 per month, with additional calls priced at $3 each. A higher tier plan could offer 20 calls for $30 per month, with additional calls priced at $2 each. This way, users can choose a plan that best fits their needs and budget.
  2. Roll-over calls: Another approach is to allow unused calls from the monthly allowance to roll over to the following month, so users don’t lose any calls they have paid for. This would incentivize users to stay subscribed and would also give them the flexibility to use their calls when they need them without worrying about losing money.
  3. Usage-based billing: The LLM provider could also consider implementing usage-based billing, where users are charged based on the actual time they spend on the call. This would ensure that users only pay for what they use and would remove the pressure to use up all their included calls each month. This approach would require careful tracking of call durations and would require some technical infrastructure to implement, but it could offer a fair and flexible pricing model.
  4. Package deals: Finally, the LLM provider could consider offering package deals for users who need a larger number of calls, such as a bundle of 50 calls for a discounted price of $80. This would give users more flexibility and would allow them to save money if they need to make a high volume of calls."

Who knows what GPT4 would have come up with - I’ll let you know at 9:37 :slight_smile:

1 Like

Interesting idea. But those pricing targets are a bit steep. I agree with your idea in general, there should be some flexibility, but those prices seem unrealistic. You should’ve asked gpt-4 for advice, not that genius gpt3.5 lol. Example:

For example, a basic plan could offer 10 calls for $20 per month…

We already have 25 calls per 3 hours, why are you giving them worse ideas, aussie?

Personally, I’d like as much freedom as possible: to be able to use the service when I want, how I want, based on how much I paid, without it becoming unusable for what I need it for. There “could be” several plans, but it may be difficult to create that given the OpenAI mission.

Instead, this would work for me: there should be at least 5 to 10 such criteria that can be used to tune the service to our needs at any time, so we can choose how we use it. The more we turn them up, the faster we spend the resource allowance we paid for. This works best because it gives you freedom (#1 most important) to tune it to your own needs, but the more power you want, the faster you spend paid resources. The biggest problem is determining the default. I think the current limits could be the default, or it could be determined based on your needs.

Could someone kindly explain to me how the “X calls per 3 hours” model works? Specifically, do I have to wait the full 3 hours after reaching the limit or are the 3 hours counted from the first call I initiate after a longer break in usage? Thanks in advance for a clarification.
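Nobody outside OpenAI seems to have confirmed the mechanism, but limits like this are commonly implemented as a rolling window rather than fixed 3-hour blocks: each message is timestamped, and a new one is allowed whenever fewer than 25 timestamps fall inside the preceding 3 hours. A hypothetical sketch of that behaviour (this is an assumption, not documented ChatGPT behaviour):

```python
from collections import deque
import time

# HYPOTHETICAL rolling-window limiter (OpenAI has not published how theirs
# works): a message is allowed whenever fewer than `limit` messages were
# sent in the preceding `window` seconds, so capacity frees up
# message-by-message as old timestamps age out of the window.

class RollingWindowLimiter:
    def __init__(self, limit=25, window=3 * 60 * 60):
        self.limit = limit
        self.window = window      # window length in seconds
        self.sent = deque()       # timestamps of recent messages

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.sent and now - self.sent[0] > self.window:
            self.sent.popleft()
        if len(self.sent) < self.limit:
            self.sent.append(now)
            return True
        return False
```

Under this model you never wait a flat 3 hours after hitting the cap; you regain one message as soon as your oldest message drops out of the 3-hour window. Whether ChatGPT actually works this way is conjecture.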

As a student who pays for ChatGPT and constantly uses it to help with my self-guided learning, the cap is way too small, especially for topics I am not very confident in: halfway through I’d reach the cap and have to wait several hours before I can continue my learning.

Any news about cap changes? Will it increase?

1 Like

This needs to happen: higher caps, and 6 times both what the AI can write to you and what you can send to the AI as well!