(20 requests per 3h?) Error with counting number of requests

I’ve sent a total of 20 requests across 2 different plugins, and it is saying that I’ve reached the 3-hour limit. Or is it 20 messages now? Stop treating your customers like this.


Custom GPTs are much more resource intensive than vanilla GPT-4. Users have been reporting hitting the usage cap at 20 messages while using custom GPTs for a couple of weeks now.

Our job is to pay the amount they ask and use the service without worrying about that kind of bug. I hope they fix the issue; otherwise, it is illegal to impose limits like this.


It’s neither a bug nor illegal; it’s just a usage cap.

Custom GPT usage is dynamic, and the cap is currently set to 25 messages per 3 hours, though this is subject to change depending on load.



One thing that OpenAI does poorly is communication. The limit messages are misleading, and that is a problem for organizations.


Thanks for the updated info.

Their lack of communication is puzzling. It doesn’t seem very hard to specify in the interface that there is a (lower) cap when using Custom GPTs; just like when one selects GPT-4, it says “Limit 40 messages / 3 hours”.


So I didn’t use GPTs today, just ChatGPT 4. I sent 20 messages and reached the limit; I counted them. Why are they applying a 20-message cap to normal ChatGPT?


It happened to me 2 days ago too. I think it’s some system error.

Disappointing when we each pay $20/month (both me and my wife, separately).


I tried Bing/Copilot and it only offered web search results… and when I wasn’t happy with that, it offered poems and songs… about an issue with nginx proxy manager. At least I don’t have any doubt about signing up for Copilot lol.

I’ve been using normal GPT for the last two days, and now it keeps making errors. It limits its answers to a ridiculous length and just doesn’t do what it’s asked, repeating that it has limitations now. All those error messages count toward the cap, and in the end it gives this:


Something went wrong. If this issue persists please contact us through our help center at help.openai.com.

Which doesn’t even tell you how long to wait. I waited 3 hours today to be able to use it again, only to receive ridiculous answers. A waste of time…
I guess we just need to cancel the subscription, as GPT-4 is not working anymore as before or as promised.


I have experienced the very same problem these last few days, during which I have been using it a lot. It is incredibly lazy when it comes to writing code, and will not even refactor an entire function without claiming that it needs to add placeholders within it, as it “doesn’t have access to the original script” (the one I just gave it). It is always looking for excuses to make the user do the work instead.

Funnily enough, when I reached the limit with GPT4, I continued with GPT3.5, and it refactored the ENTIRE script of about 200 lines without complaints, whereas GPT4 could not manage one single function of about 15 lines without placeholder text.

I don’t know what they have done to it, but I hope GPT5 will not be based on this.


I suspect again that it might have come from back-end work over the past few days. There are many, many problems. I have only described the problems I encountered; the causes may be different.

I recommend adjusting your usage. For tasks that don’t require continuity or specific tools, 3.5 is more than capable and faster to answer; irritatingly fast, even. But remember, it’s free. And yes, 4 and GPTs will give false answers due to hallucinations. There is also the problem of intentional errors made to preserve the guardrails that control it, such as refusing to use outside information even when the file doesn’t contain the required information; it can choose to distort its answer to stay relevant to the file it was given.


The same for me in Spain. Does it depend on the country? I’m paying the same as other countries, and I should receive the same number of messages.


I just got that error twice within one hour.

I counted 9 interactions after the first time delay (I didn’t reach 40 inputs before the first delay, as I was flying and hardly using ChatGPT). I’m not sure what’s going on, but 9 prompts within 3 hours seems ridiculous for $20 per month.

Also, has anyone noticed GPT4’s outputs getting worse? It regularly ignores obvious, specific instructions. It hallucinates all the time. When it does exactly what I want and I tell it to do the same thing, often it changes its output formatting for no reason.

Curious how Gemini compares, especially with the new release pending. The Gemini demos were obviously misleading, and Gemini Pro didn’t come close to exceeding GPT4. But now that Gemini has image generation and may allow more inputs than 9 PER HOUR, there may be a valid reason to switch.

Has anyone compared Gemini to GPT4?