Feedback on GPT-4 limits in ChatGPT Plus

The GPT-4 model was limited to 100 messages per 4 hours. Now it seems to be limited to 50 messages over the same window. I think it should be more transparent what the limit is before you buy ChatGPT Plus. I've heard from many people who bought ChatGPT Plus just for GPT-4. The only official statement I could find is that there is a “dynamic limit”. Maybe the current limit should be shown somewhere?

I also think the limit should be expressed as (tokens / time) instead of (messages / time). When a user submits one long text and asks many small questions about it, that certainly takes less computing power than sending many huge texts to summarize, or generating long content.

Are there any estimates of how fast the servers will be scaled up, or how long it will take to make the model more efficient?


I agree. Looking for a place to get a refund. I didn't buy a capped service, and you never stipulated that on the sales page… but by all means, treat the paying customers worse than the free customers. Sustainable business model you have there.

Agree. Didn't buy a capped service, and there was no visible mention of the cap before purchase. Also: speed not improved. Please refund ASAP.

thx

Agree with above. I’ll stop paying for the service soon.

I am using ChatGPT + GPT-4 to build with code (I'm not a professional programmer; I'm an indie consultant with an MBA and a decade of startup/PM experience who does project-/results-based work for clients), and it has been an insane boon: a huge boost in productivity. I'm solving real problems, including saving myself >$200 in just a few minutes: Christian Ulstrup on LinkedIn: #python #ai #whisper #openai

I am now consistently running into the GPT-4 cap, which is driving me nuts and preventing me from making additional progress.

Also, the context window isn't long enough, and GPT-4 keeps “forgetting” critical information as I ask it for help updating a Python app I'm working on.

I would HAPPILY pay more for more GPT-4 (and a faster version if that’s even possible) with a bigger context window. Please consider making this possible, OpenAI team!