After the latest update, ChatGPT has been severely dumbed down! It’s almost useless. The output has decreased to less than 1000 tokens. I cannot make it produce longer content. If they don’t fix this asap, they will lose 90% of their subscribers. That’s insane.
What are you talking about? It’s doing fine. That sounds like a prompting issue, because ChatGPT on my end has shown absolutely no performance degradation whatsoever, and I’m using it ~8 hrs a day.
I don’t believe you. It’s probably that you use it for simple tasks and you don’t need large token outputs. I have been using it since day one and it’s totally unacceptable. The context size has been greatly reduced, and I cannot make it return a 1k-token output. It replies super short and misses at least half the details. Try giving it a big document and asking for a summary. I tested Google’s chat-bison model on a document and it returned 1800 tokens of summary. Same document and prompt with ChatGPT and I got 600 tokens. Complete joke.
“It’s probably that you use it for simple tasks and you don’t need large token outputs.”
I do not use ChatGPT for “simple tasks”. In fact, I use it for complex work (and, btw, I have also been using it since day one), which has allowed me to become an active and helpful contributor on this forum, teaching others how to write effective prompts. I am not here to argue, I am here to help. I am not trying to shame you; I am simply trying to tell you that your issue is likely due to the way you’re prompting the model.
Now, are you going to be argumentative, or do you actually want help? I think this should be demonstration enough.
I agree that you are an active contributor, but you are definitely not helpful.
You just provided screenshots that proved me right. Your output is 700 tokens. If you were really proficient, you would have checked that.
The average output length was shortened a while ago. Now it will go up to around 1024 tokens, and a button appears if you want the text to continue.
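If you want to sanity-check an output’s length yourself rather than eyeball it, you can estimate its token count. This is a minimal sketch using the common rule of thumb of roughly four characters per token for English text; it is only an approximation, not the BPE tokenizer ChatGPT actually uses:

```python
# Rough token-count estimate for English text.
# Heuristic only (~4 characters per token); a real BPE tokenizer
# such as OpenAI's will give somewhat different numbers.

def estimate_tokens(text: str) -> int:
    """Estimate token count using the ~4-chars-per-token heuristic."""
    return max(1, round(len(text) / 4))

# Example: a ~4500-character reply estimates to ~1125 tokens,
# i.e. right around the ~1024-token point where the continue
# button tends to appear.
reply = "The quick brown fox jumps over the lazy dog. " * 100
print(estimate_tokens(reply))  # → 1125
```

If you need exact counts for OpenAI models, a proper tokenizer library gives the real figure; the heuristic above is just enough to tell a ~600-token reply from a ~1800-token one.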
Hey, there is no reason to be such an asshole and act like this, especially toward people who are just trying to help you figure out how to solve your problem.
Many other people in this forum would disagree with your statement about my help. I am proficient precisely because I don’t use token length as a measure of response quality. Just because something is longer doesn’t mean it’s better. Length and quality do not correlate.
No need to cry and call me an asshole just because you tried to prove me wrong and ended up embarrassing yourself by displaying limited knowledge of what I was talking about.
Also, no need for your excuses. I started this topic saying that the output limit has been greatly decreased. Nobody cares about your “qualitative responses” or your second attempt to look proficient. If I give it an 8k-token document for a summary or analysis and it returns only 600 tokens, how high-quality do you expect that response to be? Hint: zero (z-e-r-o).