Today, using GPT-4 with ChatGPT Plus, I receive this error repeatedly after entering a prompt:
“The server experienced an error while processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if the error persists.”
Occasionally, when I resubmit the prompt, a response is generated, but I usually see this error again.
It seems each prompt resubmission after an error counts against my prompt quota, as I'm now looking at this:
“You’ve reached the current usage cap for GPT-4. You can continue with the default model now, or try again after 11:41 AM”
I received 5 total replies in this session. One of them was in response to my prompt, “Please continue your most recent response, which cut off prematurely.”
Same issue here; it's been happening since Friday. I don't know why I'm paying when I can't even use it.
Responses have become much shorter as well, and generation speed is very low. It seems the servers don't have the capacity for so many users.
Same here, I'm hitting the same issues over and over. Generation is much slower when it happens, but 90% of the time I just get the error: "The server experienced an error while processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if the error persists."
To be fair, GPT-3.5 runs without issues, which is technically what we’re paying for. GPT-4 access is still considered a preview and is a bit of a bonus for subscribers. Though there is a stark difference between 3.5 and 4, which makes it hard to go back sometimes.
It's working more reliably for me now. It must just be transient overloading … but it would be nice not to burn through prompt credits on queue submissions that get rejected.
To increase the length, you can try the following:
By "short" I meant that it ends mid-sentence and I have to use a "continue" prompt.
Ah, thanks for specifying, @drugaga2!
So maybe this helps you: