This has happened to me quite a few times. Has the limit been reduced or something? And can I do anything to improve this?
When using GPTs in ChatGPT Plus, the AI models behind the scenes incur much more expense, making multiple calls to APIs, performing retrieval, and running Code Interpreter.
OpenAI seems to have applied an undocumented "counts for two" formula to their use.
Piggybacking off of what @_j said,
I’ve noticed a lower limit myself while experimenting with some Custom GPTs. Can you verify whether you were using the default GPT-4 model or a Custom GPT when you hit the limit after ~25 messages?
After noticing a few times that I hit a limit around 25 messages on Custom GPTs, I switched to the default GPT this time and still got only a 24-message limit. But when I hit it before the 3 hours were up, I could message again after just an hour.
I got the same message with fewer than 24 messages when using a GPT.
With the message there is a link that says: To give every Plus user a chance to try the model, we’re currently dynamically adjusting usage caps for GPT-4 as we learn more about demand and system performance.
This doesn’t sound good to me. I understand having caps, but they should be communicated, not dynamically adjusted.
It sounds like they are bringing in more Plus users by tightening the cap instead of increasing capacity to handle all the new users under the previous cap.
This just happened to me.
Also, I wonder whether network errors and other errors on the ChatGPT side (not mine) are counting against my message quota by forcing me to hit “regenerate”.
It keeps happening day after day.