ChatGPT - Do we get exactly 40 messages per 3 hours?

Is it my imagination, or was my recently restricted quota of 40 prompts per 3 hours (which I never agreed to) depleted before I submitted 40 messages today?

Recommendation:
OpenAI should provide a log of the submitted prompts, so customers have visibility of their ChatGPT transactions, like a bank statement. This seems like a reasonable request since we pay for the service and get cut off based on the number of transactions. I want the ability to review my OpenAI usage to ensure any cutoffs by OpenAI are valid.

OpenAI should provide full transparency of transactions. I want the equivalent of a bank statement, including my fees (deposits), costs (prompts) and revenue (based on upcoming revenue sharing), so it’s clear what I paid and which transactions depleted my entitlement.
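
To make the request concrete, here is a rough sketch of what one entry of such a statement could contain. None of these field names come from OpenAI; they only illustrate the level of detail being asked for:

```python
# Hypothetical shape of one "usage statement" entry.
# These field names are invented for illustration; OpenAI exposes no such export.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class UsageEntry:
    timestamp: datetime    # when the prompt was submitted
    model: str             # e.g. "gpt-4"
    gpt_name: str          # the custom GPT involved, empty if none
    messages_charged: int  # how many of the 40/3h quota this turn consumed
    error: bool            # whether the turn ended in an error

def remaining_quota(entries: list[UsageEntry], cap: int = 40) -> int:
    """Quota left after the charges listed on the statement."""
    return cap - sum(e.messages_charged for e in entries)
```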

If a specific GPT significantly depletes my OpenAI quota, then I need visibility of this.

Currently, it feels like OpenAI is dynamically revising its offering based on its ability to handle the volume of user requests. Any such changes need to be clearly communicated to users, especially if they’ve signed up expecting more.

12 Likes

I want to add that users who suddenly find themselves unable to continue their conversations due to intransparencies in the GPT system will be turned off.
Not just from the GPT they had just been using but from using custom GPTs at all.

1 Like

Can you please explain this further? I’m not sure what you mean by “intransparencies”.

I mean the number of messages deducted from the 40 messages per three hours for each single message sent to the custom GPT.
If I send one message and get one reply, but whatever the GPT did internally counts as several replies.
Pretty much what you wrote, but from a different perspective.

@vb If I hear you correctly, you’re saying:

If someone sends a GPT one message, that can potentially count as more than one of the user’s allocated 40 prompts per 3 hours. E.g. if the GPT being used made 10 calls, that would count as 10 calls against the original user’s quota, without the GPT user having visibility of how many transactions the GPT requires?

Is this what you’re saying? And is this a guess or do you know for sure?

This is what users have been reporting. And comparing it to what is billed for an Assistant run, it appears to be a reasonable assumption.
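
For what it’s worth, a single turn against a custom GPT can plausibly fan out into several internal model calls (planning, action calls, the final answer). Here is a rough sketch of that hypothesis, not OpenAI’s confirmed accounting:

```python
# Hypothesis only: a single user turn may fan out into multiple model calls
# when a custom GPT uses tools or actions. If each internal call were charged
# against the 40-per-3-hours cap, the quota would drain faster than the
# visible chat suggests. The steps below are made up for illustration.
def run_turn(user_message: str, quota: int) -> int:
    """Simulate one turn of a hypothetical tool-using GPT; return quota left."""
    print(f"user: {user_message}")
    steps = ["plan", "call_action", "read_result", "final_answer"]
    for step in steps:
        quota -= 1  # assumption: every internal model call is charged
        print(f"{step}: quota now {quota}")
    return quota

quota_left = run_turn("Summarise this PDF", quota=40)  # one visible reply, 4 charges
```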

P.S. I am not an OpenAI staff member.

1 Like

If that’s correct, it highlights the need for users of GPTs / assistants to have visibility of the number of transactions involved, so they know what they are paying for.

Otherwise, GPT subscribers will be disappointed when they find their 40 prompts unexpectedly depleted due to running a couple of GPTs.

1 Like

Let me add this link reporting such an issue:

2 Likes

Oh no… Thanks. I guess that confirms my concerns.

1 Like

I have also noticed that if an error occurs while a response is being generated, the message still counts as “used”. It has become usual for me to reach the request limit while resending prompts that previously threw an error.

Not trying to beat a dead horse here, but this is absolutely maddening. If they INSIST on a cap, they should be very open about it. Give users a visual indicator of how much use they have left instead of just letting us hit a brick wall when we’re finally picking up steam. There is an ebb and flow in interacting with AI, I’ve found, at least for my purposes. When the limit pops up suddenly, it makes me never want to come back, because I’ve got to build up the same energy I had before.

4 Likes

Hi, I just joined based on your thread, ha ha. Thanks bro, because I am getting throttled like a moty foty all the time now as a paid subscriber. So I sign The Petition of your Thread!

This is troublesome as both a user and a developer, but I have developed a development flow to help relieve the frustration.

1. Use 3.5 for really basic programming help.
2. Test my GPTs with GPT-4, of course.
3. When I hit the cap, go over and work on improving my API, using Postman (or a quick script, see the sketch below) to test.
4. When I get my GPT usage back, test my new and improved API.
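
If anyone wants a quicker loop than Postman for step 3, a minimal script does the job too. The endpoint and payload below are placeholders for whatever your GPT’s Action backend exposes:

```python
# Quick check of a GPT Action backend while waiting for the cap to reset.
# The URL and payload are placeholders -- substitute your own API.
import requests

resp = requests.post(
    "http://localhost:8000/my-action",  # hypothetical local endpoint
    json={"query": "test"},             # example payload
    timeout=10,
)
print(resp.status_code)
print(resp.json())
```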

3 Likes

I tried an experiment by submitting 40 simple prompts to see how many questions I could ask before reaching the cap.

I avoided using GPTs to reduce the risk of one prompt being counted as more than one query.

Result: I was allowed exactly 40 prompts.

I wonder if:

a) Do other users get the same experience at different times of the day? E.g. how dynamic is the cap?

b) How much can the 40-prompt cap vary when running GPTs? And if GPTs do vary in their impact on the user cap, which GPT can deplete a user’s quota fastest?

Of course, my cap is now reached, so I can’t test further. :joy:

3 Likes

I think you’re missing the point. The question isn’t whether it is 40 simple questions within 3 hours, but how it counts errors and long requests, and when exactly these 3 hours start. Is it from the moment the first message is written, or is it a 3-hour difference between the last message and the 40 messages before it? This is a significant distinction.
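
To illustrate why that distinction matters, here is a sketch of a rolling window versus a fixed window starting at the first message. Both are guesses at the mechanics, not documented behaviour:

```python
# Two possible readings of "40 messages per 3 hours" -- both are guesses,
# not OpenAI's documented behaviour.
from datetime import datetime, timedelta

WINDOW = timedelta(hours=3)
CAP = 40

def allowed_rolling(sent: list[datetime], now: datetime) -> bool:
    """Rolling window: only messages sent within the last 3 hours count."""
    return len([t for t in sent if now - t <= WINDOW]) < CAP

def allowed_fixed(sent: list[datetime], now: datetime) -> bool:
    """Fixed window: 3 hours measured from the first message of the batch."""
    if not sent or now - sent[0] > WINDOW:
        return True  # the old window has expired; a new one starts now
    return len(sent) < CAP
```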

Furthermore, there’s an issue with what look like intentional errors from GPT, especially when using Analysis (in my case, over 90% of the time). These appear to be deliberate errors that aren’t treated as questions at all but simply result in error responses. When you consider the cost savings in processing power at that scale, it amounts to millions of euros per day, and that’s not a contract I signed up for.

Typically, when there’s a lack of transparency in anything in life, there’s always a reason behind it that’s unlikely to please the paying party.

1 Like

This was my first measurable test of the user caps, and I thought some users on this forum may find it interesting. Obviously, it’s best to start with the most basic tests, such as checking whether we actually get 40 messages per 3 hours as stated. My initial test confirmed the 40-prompt cap.

My second test found that listing questions inside an Excel spreadsheet and uploading it resulted in many more questions being responded to, although I haven’t measured the limits of this.
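
In case anyone wants to try the same batching trick, here is a minimal way to build such a spreadsheet. openpyxl, the file name and the questions are just my choices for the example; whether batched questions keep counting as a single message is exactly what remains untested:

```python
# Build a simple .xlsx with one question per row to upload in a single prompt.
from openpyxl import Workbook

questions = [
    "What is the 40-messages-per-3-hours cap?",
    "Does an error response count against the cap?",
    "Do custom GPT tool calls count as extra messages?",
]

wb = Workbook()
ws = wb.active
ws.title = "Questions"
ws.append(["#", "Question"])
for i, q in enumerate(questions, start=1):
    ws.append([i, q])
wb.save("questions.xlsx")  # upload this file alongside a single prompt
```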

This technology is evolving rapidly, but anyone suggesting “intentional” and “deliberate” errors should really present evidence. It’s widely recognised that LLM solutions are not foolproof, yet they are improving significantly, and the main LLM providers are open about this.

1 Like

I agree, but I don’t believe OpenAI will be willing to provide any evidence, much like their lack of transparency in the 40/3 issue.

Regarding the analysis errors, it’s quite straightforward. When you instruct GPT-4 to search the web for a specific topic (let’s take “cats vs. dogs in cartoons”) and request it to create a title, description and article without capitalizing titles (i.e., avoiding Title Case), approximately 70% of the time it will return an error and stop. Of the remaining 30%, roughly 95% will still use Title Case for everything, not just the title. If you replicate this process on another ChatGPT account, using a VPN and different geographic locations, the errors disappear. It’s important to note that these mistakes don’t seem to be specific to particular locations or accounts but rather depend on the primary usage pattern of the account. So if you’re not receiving the error, it’s likely because you use it primarily for something else. I’m not referring to GPTs or the API.

Also… it counts a new “message” any time you get a failure response and have to regenerate.

No, in some categories in this forum the solved plugin is not enabled.

3 Likes

The post is only in one category: GPT Builders.

“Feedback” is just a tag, since specifying a tag is mandatory. It’s not a sub-category. Removing the tag would just make the forum ask for a tag again.

I’m not too fussed about setting a solution as I’d rather others commented if they have other experiences.

1 Like