The limit is way too low, especially if you have a plan for your whole devops team of >10 people using it simultaneously. They should change it so that 40 GPT-4 messages per 3 hours are free, and after that you pay per message, e.g. $0.05 each.
You can use the API and pay per token if you want that type of pricing scheme. There are plenty of options (some open-source) for a UI, and the Assistants API (although still lacking and unsuitable for production) conceptually comes with all the same features as ChatGPT.
I doubt even charging $0.05 per message would suffice for OpenAI. They throw tokens like confetti, especially when using Tools.
For reference, JUST the system instructions (~2,200 tokens) for GPT Builder would cost us $0.02 through the API. That's not including the user message, or the functionality and output.
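To sanity-check that $0.02 figure, here is a minimal back-of-envelope estimator. The per-1K-token rates below are an assumption (GPT-4 Turbo's published input/output prices at the time of this thread); check the current pricing page before relying on them.

```python
# Rough API cost estimator. The rates are assumed GPT-4 Turbo prices
# at the time of this thread: $0.01 / 1K input, $0.03 / 1K output.
INPUT_RATE = 0.01 / 1000   # dollars per input token (assumption)
OUTPUT_RATE = 0.03 / 1000  # dollars per output token (assumption)

def estimate_cost(input_tokens: int, output_tokens: int = 0) -> float:
    """Return the estimated dollar cost of one API call."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# ~2,200-token system prompt alone, before any user message or output:
print(round(estimate_cost(2200), 3))  # -> 0.022
```

So the system prompt alone lands at roughly $0.02 per call, matching the figure above; every user message and model reply adds to that.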
Is this true? Where is that info from? Yesterday evening I was able to use 40 messages on a personal GPT alone. Today at 12:30 UTC the cap was at 20 messages again… I don't think it's about personal GPTs.
If they also reduced ChatGPT-4 to 20 and haven't updated the text… That's messed up. It could be that there's a soft limit triggered by sending a lot of messages in a short time.
To be fair there's no "official" announcement for this (there never is), but from my experience there's a forum consensus that Custom GPTs have a halved limit.
Maybe true, as I used my limit in about 20 minutes. But then again, yesterday evening I had no problem at all using 40 messages in 45 minutes…
I cannot confirm that as I sometimes have 40 messages when using a personal GPT.
There must be a reason there's no indication of current messages used / cap limit. The reports have varied as well; it could be the number of tools being used. We're left to speculate from anecdotes. I'm under the impression that this rate is dynamic and being constantly adjusted.
So, anecdotally, I have also noticed that my Custom GPTs (which don't use any tools) have a much lower rate limit.
I've been using GPT-4 since last March. Right now, the chat limit is not 40; I'm getting cut off well before that. So there's something "wrong" with the "dynamic" limit, and it's changed notably in the past week or so. I really would like you all to revisit what is going on with the limit, and whether 40 is too constraining. I'd be willing to pay for more, but the team-based option's $600 up-front payment is too much of a barrier for me to test whether it's a suitable option, even though I'd just be using it myself, not as a "team".
It's frustrating to have such a low cap on the chat when I run into many failures with GPT outputs that count towards the limit.
Cap is 19 in 3 hours for me right now
I suspect the low cap is due to the rollout of Copilot.
If it weren't for Copilot's 2,000-character prompt limit and 30-question thread limit, I would have switched to it by now.
Limit is now down to 20 prompts… What is this?!?
The use of GPTs should be adjusted so it doesn't consume the cap faster or have a usage limit different from plain GPT-4.
For those reporting fewer interactions available: are you, like the screenshot a few posts above, generating DALL-E 3 images in succession?
It is natural for image generations to have different limitations, and you should view ChatGPT as just a portal to that computationally costly feature. Transparency would be helpful, but it would also help people maximize and exploit their use of the shared resources. Even 20 images would have costs approaching $2 on the API, where you pay per image, and more than that when using DALL-E 2 via prepaid image credits on labs.openai.com.
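To put a rough number on that "approaching $2" claim: a minimal sketch, assuming the DALL-E 3 per-image API prices at 1024×1024 at the time of this thread (standard vs. HD quality); these rates are assumptions, so check the current pricing page.

```python
# Back-of-envelope cost of generating images via the API instead of
# through ChatGPT. Per-image prices are assumed DALL-E 3 rates at
# 1024x1024: $0.04 standard, $0.08 HD.
PRICE_PER_IMAGE = {"standard": 0.04, "hd": 0.08}

def batch_cost(n_images: int, quality: str = "hd") -> float:
    """Dollar cost of generating n_images at the given quality."""
    return n_images * PRICE_PER_IMAGE[quality]

print(round(batch_cost(20, "hd"), 2))  # -> 1.6
```

Twenty HD generations land around $1.60 under these assumed rates, which is indeed in the "approaching $2" range.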
My message cap was reached after just 18 messages. Really, what is wrong with this company? Also, 40 messages is way too low for paid members.
Yeah, going to test out Copilot and see what it feels like there. These restrictions are ridiculous.
I just subscribed to ChatGPT a month ago, and it is absolutely worth the money. HOWEVER, these issues must be dealt with in the near future.
40 messages means 20 question-answer pairs, so we have 20 questions per 3 hours. Sometimes that is enough, but when it is not, it gets really annoying. It is especially low when I use DALL-E to generate a picture, because it takes around 5-7 prompts until I get a remotely acceptable one.

I understand that right now OpenAI's main focus is to get as many subscribers as possible and corner the market as much as possible until other competitors (like Google) appear, hence they must use some limitation to make sure everybody can get a taste. But I think it is not fair to us already-paying customers (not to mention that this limitation is not indicated anywhere when you subscribe, which is quite unethical, I might add).

Anyway, I could go on "whining", but let's focus on the solution instead. Midjourney offers a cheaper ($8) limited version and a more expensive ($24) unlimited version that comes with unlimited relax GPU time and also some limited fast GPU time. However, if the fast GPU time is not enough and speed is of the essence, there is an option to purchase additional fast GPU time (for a really expensive price). I find this approach fair, and I'm pretty sure something like it can be applied here as well. So, dear OpenAI team, please hear our "roar" and make this your priority, because this limitation is getting more and more annoying!
Thanks in advance Guys,
Cheers
I am locked out after 20 messages; I thought the limit was 40. Is this a joke?
You've reached the current usage cap for GPT-4. You can continue with the default model now, or try again after 2:18 PM.
OK, what's going on? Did something change recently? The last couple of days, GPT-4 has been basically unusable. I can't get any real work done, so I've started exploring other stuff. Bard seems like it's creeping up; I'm also exploring smaller LLMs on local hardware in a hybrid setup for task-specific needs. Perhaps I can get what I need out of a local hardware setup?
Anyone else experimenting?
I like the idea of pay as you go. Whatever the cost may be, so be it. I was able to type 3 prompts today and the app just hung forever. I didn't even get a notice that I'd reached any cap. This is utterly useless and really disappointing, since it has been very useful until now. I would also be willing to pay more for a plan with higher limits. Signing up for the Teams subscription is just BS: I don't want to have to manage two different usernames just for the right to have higher caps. I can't do 100 prompts in one username and then continue the chat on the other username.
I guess OpenAI / Microsoft has decided that they'd rather work with enterprise businesses than ordinary users. I can see why; there is more money in enterprise support. But abandoning us retail users, or rather making ChatGPT completely unusable for retail users (the same thing) while pretending to continue retail support (that is even worse!), with no warning or explanation, is FU.
The new capping system is a perfect example of the old "bait and switch" scam. Enjoy being F'd over, everyone!
Until something changes I'll go over and try HuggingFace Chat. They have ~10 different open-source LLMs. Maybe I can find one that suits my purpose. I pay nothing, and I'm not giving OpenAI all of my prompts for their training. They should F'n be paying us!
I agree that GPT-4 has become unusable for any real work at this point. However, I don't agree that this is a bait-and-switch or consumer-vs-enterprise thing… YET.
After digging into what it actually takes to train and operate something like GPT-4, I have a newfound respect for the issues that OpenAI is most likely facing. To give one example: if you dig through Hugging Face, you can find a model called BLOOM, with something like 176 billion parameters, which cost something like $5 million to train. Considering that GPT-4 is estimated at something like 800 billion parameters (I need verification on this number), the cost to train must have been astronomical, and the amount and cost of equipment required to run our queries must also be nothing short of amazing.
I suspect the current issues are more related to scaling up hardware amid the crazy number of people trying to use this. Ultimately, the AI gold rush may come down to who can scale up the hardware smoothly and nail the subscription tiers / cost without alienating customers.
I'm rooting for OpenAI; however, my work still needs to get done.