It is likely factored into how much they charge for GPT-4, considering that you are probably going to sleep for part of the day. If they provided a 24-hour cap, they would likely need to increase the price or lower the rate limit. I am personally not a fan of either option.
For now, 200/3 should be fine, in my opinion. A 24-hour window probably still lies in the future. They could do that without spending too much money.
Also, this is misleading: I was about to do the Team upgrade to get a higher cap, more space, and more features, but then I was told I would have to pay 50 because it requires a minimum of two users.
Not being able to talk to the company even when one is a paying customer leaves an extra foul taste.
What would you tell them, other than giving them an earful?
I'd like to ask them how I am supposed to create a custom GPT within the current usage cap, and when they plan to increase the usage cap. I'd also want them to confirm that as a paying customer I can create any GPT I want as long as I keep it private. That last one is crucial. I get why they have to have certain policies in place for public GPTs, but said policy cannot be applied to private GPTs.
I can help you answer some of these questions:
slowly, ideally not with the GPT builder
it's possible that they don't, at least not soon. The usage cap has been going down, not up.
that's a negative. that doesn't even apply to Azure clients with a monitoring exemption.
unfortunately, OpenAI products aren't really for everyone. Depending on what you're trying to achieve, you may be better served by the API product, if you have some programming chops. The API is (mostly) unlimited, but could be more expensive overall.
If you want absolute privacy and freedom, you're probably not gonna get around hosting your own model.
I understand there being a usage limit, but there needs to be some sort of transparency and consistency. Like a LOT of people, I got hit with the limit at 19 messages (I went back and counted them), NOT 40 like advertised. Advertising 40 messages, then blocking us for 3 hours at 19 is not OK; it's lying about what we get for the paid service. On top of that, there's no way for us to know how close to the limit we are; it just happens, and then we have to wait a random amount of time (for me it was 2 hours; for others it's anywhere between 1 and 3).
How is this paid service worth the money if it's neither transparent nor consistent? Honest question.
If I'm not mistaken, the time starts from the first message. Like, at the start of the day I use it, then I have 3 hours from my first message to use all 40 messages (or however many we get).
This is correct; it is a sliding window. In any three-hour period you cannot exceed 40 messages.
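For anyone curious what a sliding window means in practice, here is a minimal illustrative sketch in Python (my own illustration, not OpenAI's actual implementation; the 40-message/3-hour numbers are just the advertised figures):

```python
from collections import deque
import time

WINDOW_SECONDS = 3 * 60 * 60   # 3-hour window
MAX_MESSAGES = 40              # advertised cap

sent = deque()  # timestamps of messages inside the current window

def can_send(now=None):
    """Return True if another message still fits in the sliding window."""
    now = time.time() if now is None else now
    # Drop timestamps older than the window; they no longer count.
    while sent and now - sent[0] >= WINDOW_SECONDS:
        sent.popleft()
    return len(sent) < MAX_MESSAGES

def record_send(now=None):
    """Record that a message was just sent."""
    sent.append(time.time() if now is None else now)
```

The practical upshot is that capacity returns gradually as old messages age out of the window, rather than all at once at a fixed reset time.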
Alright, that's good to know, but again: I, and a handful of other posters I have seen here, seem to be randomly capped at 19. Is there any reason for that to be happening that you may know of?
Initially, custom GPTs had a different cap than the base GPT-4 model in ChatGPT. The caps have since been unified, so some people (not all) might still be reporting experiences from before that change.
It's also difficult to verify these reports because the limit is 40 across all chats, including in the GPT builder interface.
It would be helpful if there were a log page in the account that a person could audit to verify this, just a chronological listing of all messages sent to the model with timestamps, but there isn't one.
Historically, this limit has been somewhat dynamic to account for demand and capacity, but it's not clear whether this is still the case.
I hardly used it today and still got the usage cap message. Maybe about 300 tokens were used, a few paragraphs, but I was presented with the message in the subject title.
I want to use GPT-4 as a pay-as-you-go service via an API key when the usage limit is reached. Is there a good tool for this?
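I'm not aware of one specific tool to recommend, but if you're comfortable with a little code, the pay-as-you-go path is just the API itself. A minimal sketch, assuming the official `openai` Python package and an `OPENAI_API_KEY` environment variable (the model name is only an example):

```python
from openai import OpenAI

# Reads the API key from the OPENAI_API_KEY environment variable.
client = OpenAI()

def ask_gpt4(prompt: str) -> str:
    """Send a single prompt to GPT-4 over the pay-as-you-go API."""
    response = client.chat.completions.create(
        model="gpt-4",  # billed per token, no 40-message cap
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(ask_gpt4("Summarize the difference between ChatGPT Plus and the API."))
```

Various open-source chat front ends also accept an API key, if you would rather not write the plumbing yourself.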
I run into similar issues, especially when attempting to use Code Interpreter. As best as I can tell, it counts each time it calls the tool, and all of the tokens used, as separate messages, even if the tool call fails. Sure, I can take a break. But isn't that being told how to operate your own business at that point? Or, if you're a creative, it's being told when you can follow your inspiration.
Message Caps:
Is it better than not having access to the tools at all? Of course. Defending it as though it's a good thing isn't helping anyone, though.
Ignoring the lost ideas, and the real harm that may cause individuals in their own lives, because "that's just how it works" is a shaky defense at best.
Imagine Adobe saying you can make whatever photos you want, but you can only save 25 times per 3 hours. That would kill using Photoshop to retouch frames from films.
Solution Option:
You could instead implement a queuing system that gives a 5-minute countdown until the next inference response, so that when the message cap is reached, requests run through a queue rather than being blocked outright. This would allow the work to continue without leaving you feeling like you've been put in time-out for using the tool you are paying to use.
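Purely as an illustration of the suggestion (this is hypothetical, not anything OpenAI provides; `send_to_model` is a placeholder for the actual inference call), a queued fallback could look roughly like this:

```python
import time
from queue import Queue

QUEUE_DELAY_SECONDS = 5 * 60  # the suggested 5-minute countdown

pending = Queue()

def submit(prompt: str):
    """Queue a prompt instead of rejecting it once the cap is reached."""
    pending.put(prompt)

def drain(send_to_model):
    """Release one queued prompt every QUEUE_DELAY_SECONDS."""
    while not pending.empty():
        prompt = pending.get()
        send_to_model(prompt)            # hypothetical inference call
        time.sleep(QUEUE_DELAY_SECONDS)  # countdown until the next response
```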
For many users $20 per month is a huge amount, especially for those who have not figured out how to use it to increase their earnings in any way. Maybe the intent is just to let those people be left behind?
My Solution:
I use 3.5 for sorting and classification tasks, or swap to Copilot, Bard, or Claude depending on the type of task. Ultimately, generative AI is an iterative process, at least how I use it, so any individual response isn't that critical, since I already expect I'll need to review and rewrite it at least one more time.
But I am also able to empathize with others who may not be as agile or able to adjust their tasks as I am.
Shouldn't the aim of this conversation really be how to make sure the system allows all ships to rise?
I have a suggestion for the OpenAI team. I understand the caps on GPT-4 usage due to the cost of running it, but I do have a slight frustration with the cap when creating a GPT.
Situation: I was creating a GPT. I didn't think I had used all 40 prompts, but maybe I did; I did use a few prompts on the conversation side as well. I had a GPT partly created and got the message that I had reached my current cap.
Frustration: I am now forced to do one of two things: either keep the screen open and wait 3 hours, or lose my progress on creating the GPT. The source of the frustration is that, while the GPT builder says I have reached my cap, it won't even let me save the GPT I was creating so that I can come back to it later to update/edit it.
Suggestion: Allow the creation to be saved partway through, even when the user is out of prompts, and possibly have a separate bank of prompts specific to creating a GPT (I understand this is a more complicated ask than allowing the save). The most frustrating part is not being allowed to save there, which would at least let me use 3.5 while I wait for the prompts to reset.
The work-around I am using currently is saving the prompts from building this GPT into a Notepad file, after which I will start this particular chat over. Thankfully this specific chat is only a few prompts long.
The other thing that would be a nice improvement: if a prompt is only an affirmative or negative response to a suggestion from the GPT, let that prompt not count against the usage (e.g., Chat: "Does the name __________ work for you?" Me: "Yes").
Regardless of the length of the response, there is still a not-insignificant amount of computing power spent processing the incoming message and the conversation history, so this is not likely to ever happen.
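To make that concrete, here is a rough sketch using the `tiktoken` tokenizer (the conversation text is made up for illustration): even a one-word "Yes" still requires the model to re-process the whole conversation so far as input.

```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")

# Made-up example: the whole conversation so far is re-sent as context.
history = (
    "System: You are a helpful GPT builder.\n"
    "User: Help me design a GPT that reviews contracts.\n"
    "Assistant: Sure! Does the name 'Contract Companion' work for you?\n"
)
reply = "Yes"

print("history tokens:", len(enc.encode(history)))  # dozens of tokens
print("reply tokens:  ", len(enc.encode(reply)))    # just 1
```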
I agree that having more messages would be fantastic. Period.
One observation:
I suppose if you still need to figure out how to earn money with these tools, the open-source community has cheaper options.
Ultimately, OpenAI offers a huge user base, and it will be interesting for many developers for years to come.
I hear all of the "it's a steal" comments, but it's not that much of a steal when 25-30% of those 40 messages receive the following error message:
"I'm unable to generate the image you requested as it didn't align with the content policy. To move forward, please provide a new request or modify the existing one, ensuring it adheres to our guidelines."
Also, I know my prompts are not violating any policies b/c I then prompt the chat to evaluate the prompt and it responds with:
"The provided prompt does not violate any policies for ChatGPT or Dall-E. It adheres to the guidelines by focusing on creating a respectful and non-explicit image. The description aims for a hyper-realistic portrayal of an African American woman with specific makeup and hair details, fitting within the bounds of artistic creativity without infringing on anyone's rights or depicting inappropriate content. The request for a landscape orientation and referencing a previous artwork style is also appropriate."
Then I find out I'm getting this message simply because the platform servers are overloaded. It's not that much of a steal at that point. It's extremely problematic, to be quite honest.
There are some features that are not available in 3.5, e.g. DALL-E. I actually have subscriptions to ChatGPT Plus and Midjourney, and I'm finding that I prefer to spend my time in Midjourney. When I am in my creative state, I don't want to deal with the limits, and I definitely don't want to deal with the false policy violations/error messages. Currently, I have a project that has taken me over 5 days and is still not completed; if I had started it in Midjourney, I would have had it completed in 1-2 days. You all can keep being combative with us as we state our valid concerns and issues, or you can take notes on what we are saying. I'm assuming, by the way you are going toe to toe with us, that you work for OpenAI and have major say-so in the matter.