When I started using ChatGPT back in March with my Plus subscription, the character limit per message for GPT-4 was more than 30,000. Over the last couple of months it kept going down, while GPT-3.5's stayed roughly the same. As of today, the character limit for GPT-4 has dropped below 20,000 (see gif below).
One of the reasons I subscribed to GPT-4 was its accuracy compared to GPT-3.5 when presented with a larger amount of prompt data. However, not only is GPT-4's character limit now lower than GPT-3.5's, I no longer see any difference in performance. In fact, GPT-4 seems to make more mistakes when dealing with 15,000+ characters of data, and it often gets stuck in a loop when providing lengthy responses.
So why is the Plus subscription still a thing, when I can get pretty much the same results with GPT-3.5 and the Bing browsing plugin is subpar?
This is exactly what I'm experiencing. I paste the same prompt into both 4 and 3.5: 3.5 can take it and generate a response, while 4 tells me 'The message you submitted was too long, please reload the conversation and submit something shorter'. How do you give your free users FEWER limitations than your paid subscribers? My mind is blown.
6/27/23 … & ChatGPT4 just continues to get worse.
It seems a lot of us are having this problem.
As a screenwriter, I need to upload long synopses or movie scripts, and it's a nightmare.
1 - Has anyone heard of a solution? Could the API be used to solve this problem?
I saw this → (cannot include a YouTube link here, but search for " Uploading Files to ChatGPT: A More Powerful Experience " ; it's the first result).
2 - I heard that GPT-4 is going to go back to a 32,000-character prompt soon…
Is that true? Does anyone have any dates, or heard anything?
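One workaround for long scripts, assuming you have API access, is to split the document into chunks that each fit under the per-message limit and send them one at a time. A minimal, stdlib-only sketch; the 8,000-character chunk size and 200-character overlap are illustrative guesses, not official limits:

```python
def chunk_text(text, max_chars=8000, overlap=200):
    """Split a long document into chunks that each fit under an assumed
    per-message character limit, with a small overlap so context
    carries across chunk boundaries."""
    if max_chars <= overlap:
        raise ValueError("max_chars must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # step back so adjacent chunks share context
    return chunks

# Stand-in for a long movie script; in practice, read the file instead.
script = "INT. OFFICE - DAY. " * 2000
parts = chunk_text(script)
print(len(parts), max(len(p) for p in parts))
```

Each chunk can then be sent as its own message (e.g. "Here is part 2 of N of my script…"), though the model will still only attend to what fits in its context window.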
OMG yes please! It’s sooooo frustrating
I’ve noticed the same thing: instead of ChatGPT improving the user experience, things seem to get worse. I bought ChatGPT Plus for this very reason, to be able to send long text input (mostly code) to GPT-4. Now I can’t even do that, and the responses GPT-4 gives seem to have degraded in quality overall. Currently GPT-3.5 feels way better than GPT-4, and that’s just sad, because then why am I paying for Plus? Maybe it’s because of the message limit, which was raised to 50 messages every 3 hours? I’m not sure; it’s just frustrating. Hopefully we’ll see some positive changes soon.
I noticed that the August 3 version lowered the word limit again, didn’t it?
Lol, the August 3rd version seems to have lowered the prompt length to something like 100 words.
Yes. Sadly, I find myself switching to Claude whenever I need those extra characters.
Why make the product worse than it is? I don’t know, but someone at OpenAI has decided it makes good business sense. For coding tasks, the platform is becoming practically useless: you can barely paste a method of a few lines without an error. I am not here forever. If that is the direction, then I will become a customer of opportunity and jump on the next train as soon as one leaves the station.
They kinda have to, considering how GPT organizes information.
If they don’t limit the context window explicitly, then it will immediately forget what it was thinking about in everything up to that point.
This is a nuanced issue.
Best as I can tell, GPT-3 and 4 have a separate history processor,
which skims the chat history to find context for the current message.
I don’t know if that’s exactly true, but it makes sense to do it that way. In fact, it makes sense that it should do nothing but that.
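Whatever the internals actually are, the "forgetting" described above can be mimicked client-side: keep only the most recent messages that fit a fixed context budget and drop the oldest. A rough sketch; the 4-characters-per-token ratio and the budget are assumptions, not OpenAI's real token accounting:

```python
def approx_tokens(text):
    # Very rough heuristic: roughly 4 characters per token for English.
    return max(1, len(text) // 4)

def fit_to_context(messages, budget_tokens=4000):
    """Keep the newest messages whose estimated token total fits within
    the budget; older messages are silently dropped, which is one simple
    way a chat frontend might 'forget' earlier parts of a conversation."""
    kept, used = [], 0
    for msg in reversed(messages):        # walk newest -> oldest
        cost = approx_tokens(msg)
        if used + cost > budget_tokens:
            break
        kept.append(msg)
        used += cost
    kept.reverse()                        # restore chronological order
    return kept
```

This would explain the symptom in the thread: a long pasted script either gets rejected outright or silently pushes everything earlier out of the window.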
Yeah I cannot get it to pull through even the smallest amount of python code without just freezing.
For longer texts I usually use gptalk.io, since I can just upload the whole document and the GPT is able to reference it easily.
The only downside is that the program is meant more for online chatbots; otherwise it works pretty well.
I have the same problem. I just bought GPT-4 to handle longer code, and indeed it is worse than GPT-3.5.
Today I am being told by GPT-4 that the new character limit is just 2048 characters! How can it be so small?
The AI has no knowledge about its platform or even what GPT-4 is. It is just answering from trained knowledge from 2021 and earlier.
We only need to update the AI’s knowledge cutoff to make it imagine future possibilities about what it doesn’t know about itself:
The question isn’t one the AI itself can answer; rather, the behavior is reinforced by fine-tuning that produces short outputs or refusals for tasks that used to get GPT-4’s previous quality.
Thanks for this info. It seems I hit a snag over the weekend when GPT-4 was being very temperamental, inventing all sorts of limits and even constantly asking me to wait a few moments for responses, all of which turned out to be hangs rather than waits. Things seem back to normal now.