Yes, constantly, since I started playing with GPTs. It is very frustrating! Before that, rarely, unless I was heavily using ChatGPT that day.
You can disable chat history entirely by selecting that option in your settings. Note that doing so will disable and remove all of your sidebar message history.
I don’t know how your reply and my answer got moved to this thread. Oh well, hopefully your issue is sorted regardless.
It more likely comes from something internal on OpenAI’s side, because it happened to me yesterday too. We may have to wait a while for OpenAI to sort it out, given the impact of increasing usage. Message limits matter for getting this resolved as quickly as possible, but I found there are other ways to work with similar efficiency (I don’t want to describe them, because they may point to a vulnerability in the system and they aren’t a real solution).
What follows is based on observing my own usage and on asking the AI itself (which is not reliable and leaves some doubts, but it matches many of my tests).
The timer appears to start from the first message, and the count resets every 3 hours. What seems to be counted are messages actually sent to the AI for processing, not simply the number of times you hit send. So with normal use the number of messages isn’t much of a problem, and the wait, when it comes, isn’t very long.
What types of messages are counted? If the counting really is based on data sent to the AI for processing, then simple greetings and idle chat should not be counted at all, and merely uploading photos and files should fall under the same conditions. But once an uploaded picture or file is actually analyzed, that may be counted: for example, receiving an image and commenting on small details in passing may not count, while being asked to analyze what is in the image would. One more point I noticed: the AI that gave me this information claimed that using the browser tool does not involve AI processing and therefore isn’t counted.
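To make the counting theory above concrete, here is a minimal sketch of how such a counter could behave. Everything in it (the 3-hour window, the 50-message cap, the idea that only messages sent to the model for processing are charged) is my own guess from observation, not anything OpenAI has confirmed:

```python
import time

WINDOW_SECONDS = 3 * 60 * 60  # assumed 3-hour window
MESSAGE_CAP = 50              # assumed Plus cap

class HypotheticalUsageCounter:
    """Guess at the counting behavior described above; not OpenAI's real logic."""

    def __init__(self) -> None:
        self.window_start = None  # set when the first counted message arrives
        self.count = 0

    def record(self, triggers_model_processing: bool) -> bool:
        """Return True if the message goes through, False if the cap is hit."""
        if not triggers_model_processing:
            # Greetings, plain uploads, browser-only turns: not counted (per the theory).
            return True
        now = time.time()
        # The window starts at the first counted message and resets after 3 hours.
        if self.window_start is None or now - self.window_start >= WINDOW_SECONDS:
            self.window_start = now
            self.count = 0
        if self.count >= MESSAGE_CAP:
            return False
        self.count += 1
        return True
```

Under this model, a regenerated response is just another counted call to the model, which matches the point about refreshes below.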
But this next point I am 1000% sure of, because it happened to me yesterday: regenerated responses are also counted, even when the chatbot is the one that triggers the regeneration. (A lot of people are running out of messages for this reason right now.)
On AI-triggered regeneration: most users assume it’s always a system problem, but not always. Sometimes the AI itself asks to regenerate in order to correct the information in its answer, and it also happens with questions that produce conflicting information the model can’t easily decide between.
Messages in the GPT Builder may not be counted. At one point it asked to regenerate almost 30 times in a row (I let it keep going to see whether it would stop on its own, but it was a waste of time so I gave up), yet I could still use everything else as usual.
None of this has been technically verified by experts; it’s just my opinion, and there may be mistakes.
That’s the answer I was looking for! Are you sure the window starts when the first message is sent each time?
I am also hitting the limits while using custom GPTs, even though I’m nowhere near the 50 messages in three hours. I can use a regular GPT-4 prompt, but when I try to access one of my GPTs I get “You’ve reached the current usage cap for GPT-4, please try again after 12:23 AM.”
Is there a place to report bugs?
Can you share the details? For example, what was your GPT doing just before the warning message appeared? What do you mainly use the GPT for, and did you have other GPT tabs open? How long was the stated waiting time? That will tell me which part of my notes above I should correct.
I made a GPT. I was testing it and probably used about 10 prompts within 30 minutes or so. Then it refused to let me use the GPT anymore. Whenever I tried, I got the message “You’ve reached the current usage cap for GPT-4, please try again after 12:23 AM.”
I immediately went to chat.openai.com/?model=gpt-4 and tried to use it. It worked normally, suggesting I hadn’t used 50 messages in 3 hours.
It is still working like this now. When I go directly to GPT-4, I can use it. When I try to use a GPT, I get the error.
If it keeps happening frequently, then besides checking the usage and settings of the GPT in question, and assuming those look normal, it’s better to report it to the team. I’ve searched for similar issues and haven’t found anyone else with this yet, but it should be reported quickly, because they may be slow to respond right now. My email has gone 2-3 days with no response at all, and they usually answer very quickly. In the meantime, I’ll help you look at the information.
I’d do that only if the plan also included larger input as well as larger output. Admittedly I am still new to ChatGPT so maybe these limits mean nothing to anyone else.
Where can one report issues? (Also, rereading the original post, I believe the original poster is running into this same issue.)
From your device, in the lower-left menu where your account and chat settings are, there is a Help & FAQ option; use it to go to the FAQ page. Then in the lower-right corner there is an icon that opens the help section.
Aside from the OpenAI corporate political fight, this technical problem would most likely jeopardize a successful launch of the GPT Store at the end of the month.
I believe the political issue will be cleared up soon, but I hope the DevOps folks have seen this thread by now and are taking a look at this technical problem.
I can use a GPT for only a few minutes (about 30 prompts) before hitting this speed bump. The serious business of our GPT hasn’t even begun!
Now it is about 10:15 PM. Do you really believe someone would wait for more than 1.5 hours to use your GPT again?!
Yesterday I hit the usage limit while using my GPT. I remembered your mention that regular GPT-4 still works as usual, tested it out, and it does. I think this workaround is useful if usage is managed systematically.
Has your problem been resolved yet?
I want to believe it is internal. I just checked and I can continue to use the normal GPT, but not my custom GPTs. OpenAI needs to fix this ASAP. I pay for a Plus account to avoid these types of things.
I’m hitting my usage cap over and over again. I get maybe 1.5 hours max of use before getting dumped each time. I need to keep getting summaries – otherwise my GPT dumbs out on me. I’ve updated my paid amount to $40. I’m hopeful this will allow me more work time before hitting that not-so-magic red sign shown above.
My usage cap is down to 40 messages per 3 hours, and then it stops me at something like 29 messages. This is on ChatGPT Plus. I really don’t get it. I’m hitting the cap every day, and I keep hearing that it’s supposed to be hard to reach the limit.
So in general, I have experienced this more frequently in the past week or so. Having used ChatGPT steadily for a year now, my experience has been the following:
- There are “flurries” of this that happen every few months and can last for a week or two. In total I’d say I’ve spent about 5% of my time hitting the cap and 95% not. So it will go away.
- It is likely due to the new GPT functionality and how the Knowledge and Actions are interacting with the back end. Perhaps one response that uses Knowledge counts as “2” and a response that uses Knowledge and Actions counts as “3” (see the sketch after this list). That may be off-base, but it feels right given my experience.
- Switching to GPT-4 makes you less likely to hit the cap. Indeed, there have been times when I hit the cap with a GPT and it still allowed me to start a new thread with GPT-4 (vanilla ChatGPT).
- Version 3.5 is actually quite powerful and not capped at all. You can’t do everything you can with a GPT or GPT-4 but there is a lot you can do, and fast, with it. If you batch some other tasks and interact with 3.5 for a bit that can be a productive way to pass the downtime.
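To illustrate the weighting idea from the second bullet above, here is a tiny sketch; the weights and the function are purely hypothetical and just restate my guess, nothing OpenAI has published:

```python
def hypothetical_cap_cost(uses_knowledge: bool, uses_actions: bool) -> int:
    """Guess at how one response might be charged against the usage cap."""
    cost = 1          # a plain GPT-4 response
    if uses_knowledge:
        cost += 1     # Knowledge retrieval might add one
    if uses_actions:
        cost += 1     # an Action call might add another
    return cost

# A response that uses both Knowledge and Actions would then count as 3.
print(hypothetical_cap_cost(uses_knowledge=True, uses_actions=True))  # 3
```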
It can be frustrating for sure, but it will go away before you know it.