They claim this AI can handle all this text, and that it can do all this and all that, making it look good - but ask about 50 questions and you get the usage-limit warning.
I never got a warning that there was a usage cap when I signed up.
Seems like you’re trying to make an extra buck to pay for another service that they offer.
They should be paying me for training their AI system, given all the mistakes it makes - and instead I have to pay for it and hit a usage cap.
You don't lose the conversation history. Of course you can still interact with it using 3.5. But be careful: if you are using a custom GPT, you won't be able to interact with it, because custom GPTs work only with version 4.
Really, if someone is trying to be productive and truly use ChatGPT, 40 messages in 3 hours is just insane. I don't want to spam, and I would personally be OK with paying up to $50 a month if they removed the cap altogether, to show that I am serious.
Truly using AI takes back-and-forth conversation when researching a topic, and 40 messages is very limiting. I am almost ready to give up my account and use Claude when the next update comes out, as in many ways it is catching up to ChatGPT fast.
Will the ChatGPT 4 conversation automatically switch to 3.5 for the remainder of that conversation, or is it a new conversation where I can start a 3.5 chat?
I would prefer the latter, because it is easy to miss that the model changed midway.
Same for me. GPT-4 became extremely slow during the last week, and so did DALL·E - and the DALL·E image quality is somehow even worse than it was under high load a month ago, despite the forcibly reduced number of generations per message.
Also - and this is awful - the limit for GPT-4 was lowered silently: no public notification on the main page, in the news, anywhere. One time I simply hit the message limit suspiciously faster than before, and then, while creating a new chat, I noticed the limit had been decreased. What's worse, the model now specifically tries to shift the work of solving the given task from itself onto the user.
It still writes gibberish in the form of "%elongated description of your full task% is exciting/challenging/fun! I will gladly proceed with it." But when the real work should begin, instead of solving the task, GPT-4 gives a brief explanation and then sends me off to "learn the concepts from guides," "study it," "this is complex and may require specific knowledge" (spoiler: it's not that complex for GPT-4's capabilities), etc.
But that's exactly why I turned to GPT-4 in the first place - to solve my task and guide me in detail, step by step, listing either all the options or the most widely used best practices. If I could or wanted to do it myself, why would I use a paid service?
Also, its answers seem significantly shorter than before in their meaningful parts (while the pseudo-polite, token-eating filler at the beginning and end of its answers remains as long as ever). And it tries to ignore direct instructions about its output: for example, even when explicitly asked to provide the full code instead of shortened extracts, it agrees and still writes a cut-down version with many parts omitted.
It is extremely frustrating and counterproductive for rapid testing and iterating. Custom GPTs can't even be tested using GPT-3.5, so after a 30-minute flurry of activity, there's a forced two-hour wait. Boo!
I'm absolutely infuriated with this limit. By what line of thinking does 40 messages in 3 hours make sense? Greed? Insanity? Stupidity? Detachment from reality? Lack of care?
I go through heavy troubleshooting sessions, then GPT-3.5 takes over and sends me down ridiculous paths: platitudes and basic help guides.
It's insane, and this is probably where I'll move on to something else if it's not resolved.
Same here… That's not nice for people who are working with it.
Please tell us when this is going to be fixed, or at least make it clearer to users when they're going to be blocked, because I want to know whether I'll continue or not.
This new update has essentially made GPT-4 utter garbage, completely unusable… A 40-question limit? Are you kidding? It takes 40 questions just to train it to give you an accurate response, half of them being hallucinations and the other half-truths. What is the point of even having GPT-4? I'm cancelling my subscription. At this point GPT-3 can do everything as long as I do some extra heavy lifting; GPT-4 was really only good for extrapolating information, but the 40-questions-per-3-hours limit (which is ridiculously insane) makes GPT-4 practically nonexistent.
I have the same problem. Not knowing your (apparently individual) limit at the moment is highly unacceptable for paying customers. Arguably, having a limit at all is also an issue…
@OpenAI
I highly recommend that you (at least) implement an indicator showing where the current limit is and how much of it has been used so far.
Not knowing the prompt limit will lead users to try to cram everything into one prompt, to save some of those future prompts.
→ This will likely reduce output quality, resulting in lower customer satisfaction, and that can't be the goal of your product philosophy…
I've been using the Pro version for two months now, and today is the first time I've heard of - and hit - a usage limit for GPT-4. It seems to be around 20 messages per hour. I agree, OpenAI should have communicated this limit. Even ChatGPT itself, when asked, says there is no usage limit; it must be a new thing. This is frustrating, since GPT-3.5 doesn't have the features I need. If the limit is really this low, I am seriously considering ending my subscription. What's the use of paying for something I can't use?
The thing I paid money for tells me I have a limit. I hope it is understood how ridiculous this is. I've been using this since day one, and I have been a paid user since the day it first came out.