Personally, I have both a Plus and a Teams account, but in practice I only very rarely ran into message cap issues even with the 40-messages-per-3-hours limit on GPT-4.
With GPT-4o I’m not sure what I’d need to be doing in order to burn through 80 messages in 180 minutes.
Between writing my messages, adding relevant context, waiting for the model to write its reply, reading that reply, and acting on it, I just don’t see myself doing that non-stop for three hours straight.
If the message caps are so onerous to you, which really seems to be the issue given what you’ve written, maybe switching to a Teams account or running a local model instead would be better for your use case.
I thought the “Teams” version also had message limits? When I looked, I didn’t see any option or plan that got rid of message limits; I think they just raised the cap a bit. If the “Teams” version does get rid of limits, I’ll definitely look into that.
I too only rarely hit the cap, but when I do, boy is it inconvenient. A couple of days ago I needed to run 1000+ lines of x,y,z coordinate rewrites that involved some heavy math. Pasting it in big chunks was wrecking performance, so I cut it up into smaller bites like “okay, now do this one.” That was just one of a handful of tasks I was expecting to complete that day. I had just spent $20 on a month I didn’t even use, then paid another $20 to reactivate specifically for that day’s tasks, and within an hour and a half I was locked out and told to use the other GPT (which had absolutely no context for the conversation or workflow). I had to start over, and by the time it was rolling smoothly I got locked out of that one too. I’m lucky that particular day’s work was only for a hobby. I’m horrified to think of that happening with anything important.
I haven’t looked into a local model yet, I’ll check that out too. Thanks, elmstedt.
I use 4o as an interactive study tool preparing for my medical licensing board exams. I rip through the limits as a paid user as well. It’s so damn useful for building a database of information, actively quizzing yourself on that information, and expanding on it if need be. I just wish there was an option to purchase a higher tier without hitting limits. I’ve loosely gathered that using the API version and purchasing tokens may allow for this… however, I’m absolutely ignorant with regard to programming/coding. If I had to guess what an API is, I would say… a government agency? Obviously not, but if this is a possibility, would this option be way too far over my head to pursue? Would I even be able to achieve higher limits? Or am I doomed to the limitations? Thanks to anyone for insight.
There are no discount or promo codes. The people I’ve spoken with who use Teams would guess it’s about 150–160 messages every 3 hours, but they have never actually hit the limit.
I meant to quote-reply to the message above regarding the $108k for Enterprise and jokingly say I was searching for a promo code. I did not actually expect one. But thank you for the information!
Nothing pushes users toward trying out other AI tools more than ChatGPT unexpectedly crashing or displaying an error message in the middle of an important task.
Thanks… Looking closer at the message, I’m wondering if your post isn’t a bit unintentionally misleading?
Correct me if I am wrong, but it appears this message is specifically in response to a request for a Dall-E 3 generated image, yes?
This is a little different from the topic being discussed, which is the 3-hour-window message limit. Image generation has always had different limits than message exchanges, including a daily limit on the number of images that can be generated.
Here is a topic from eight months ago discussing the daily limit for image generation in ChatGPT:
The rationale for the separate limit for image generation is likely that the Dall-E 3 model is much more computationally expensive than an average chat message generation.
So, for better or worse, this is where we are. Image generations have a daily cap and have since at least late last year.
I have a free and a Plus account, and when I was testing my free one, it let me use two messages with custom GPTs before telling me to wait another 5 hours for another 5 messages.
This is insane. I mean, I also have a Plus account, and I was so close to cancelling it when GPT-4o was announced as coming to the free tier. I’m so glad I didn’t. GPTs and GPT-4 are so limited in the free tier that they’re impossible to use.
Upgrade to Plus if you actually need to use GPT-4. Don’t rely on the free version.
Hey, we have some automated bots that flag posts, so your post was probably automatically taken down. It is now back up, meaning a moderator restored it.
Def not an expert here, but here’s what I think/do. New tech takes a moment to catch up (yeah, I know they’ve probably had time). The method they use to balance compute probably isn’t the best, but that is cutting-edge stuff. In the meantime, while you are waiting, if you need a quick response or query from OpenAI, consider using the Playground and paying for your actual compute time with an API key (rough sketch below). This is what we devs do. If you have actual commercial considerations, this is your path. I like the simple GPTs I’ve made too, but when I’m not farting around and need real data, this is the way. lol
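To make that concrete, here is a minimal sketch of what the pay-as-you-go API path can look like in Python, using the official `openai` package. The model name, prompts, and study-quiz framing are just illustrative placeholders, not anything specific to this thread:

```python
# Minimal sketch: calling the API directly instead of using ChatGPT,
# billed per token rather than against a rolling message cap.
# Requires: pip install openai, and an API key in the OPENAI_API_KEY env variable.
from openai import OpenAI

# The client picks up OPENAI_API_KEY from the environment by default.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model choice for illustration
    messages=[
        {"role": "system", "content": "You are a study assistant that quizzes the user."},
        {"role": "user", "content": "Quiz me with three questions on renal physiology."},
    ],
)

print(response.choices[0].message.content)
```

Note that the API has its own per-tier rate limits, but you pay for the tokens you actually use rather than hitting the Plus message cap.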
Dear Jake Elmstedt, since you are a leader here, I assume you are a contributor from OpenAI, so I would like to submit a suggestion:
What about displaying a counter of remaining prompts at the top right of the page, or anywhere visible, so that we as users can adapt our usage and not get stuck mid-task in professional use?
When the quota is fully used, I would appreciate knowing when I can come back to ChatGPT to continue my work. Here is a message I got that was not very helpful for continuing my job: “You have reached the maximum number of prompts you can send per day.”
I am a plus user.
I’m not affiliated with OpenAI in any way other than being a volunteer moderator on this forum.
What you’re suggesting is a good idea, and something several other users before you have suggested.
I’ve not heard any rumblings of this being implemented and my guess is that it won’t be.
Most users never hit the limits, and I don’t know of any service out there that is particularly keen on highlighting its usage limits by implementing some type of counter like this.