Error in response almost 90% of the time

I'm getting:
{'error': {'message': 'The server had an error while processing your request. Sorry about that!', 'type': 'server_error', 'param': None, 'code': None}}
all the time today.
What is the reason? Could this be a prompt issue?


Given both the lack of context in your use case and the lack of context in the error response, it is difficult to determine the nature of the error.
Was the API call previously functioning properly, or have modifications been made recently that could have broken it? An API call has a rather delicate syntax that needs to be followed, and small, easily overlooked errors in the call can often lead to catastrophic failure such as what you are experiencing.
Perhaps catastrophic is a strong term for your situation, but I understand it can be frustrating when things aren't working. Providing more details could help in discovering the issue.


I think it's an error on their side, because many people have the same issue and are getting 429 errors for no reason. Hopefully they'll fix it soon. For now, stop your requests, because you're still charged for the prompt even when there's an error.


I'm having the same issue. Today, my prompt only works about 20% of the time. I have to keep resubmitting the same prompt and only get a result about 1 time out of 5.


I have a short prompt with the following code:
dataAI = {
    "model": "text-davinci-003",
    "prompt": "Write a short example text on:\n" + prompt + " \nExample Text:\n",
    "temperature": 1,
    "max_tokens": 400,
    "frequency_penalty": 0,
    "presence_penalty": 0
}
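Since these server_error responses appear to be transient, one workaround while the outage lasts is to retry failed calls with exponential backoff. Here is a minimal sketch; the helper name, delays, and the `send_completion` function it wraps are my own illustrations, not part of any official SDK:

```python
import random
import time

def retry_with_backoff(call, max_attempts=5, base_delay=1.0):
    """Invoke call() and retry on any exception, sleeping
    base_delay * (2**attempt + jitter) seconds between attempts.
    Re-raises the last exception once max_attempts is exhausted."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt + random.random()))

# Hypothetical usage: wrap whatever function actually POSTs dataAI
# to the completions endpoint, e.g.
#   result = retry_with_backoff(lambda: send_completion(dataAI))
```

Note that each failed attempt may still be billed, so cap max_attempts rather than retrying forever.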

"prompt": "Write a short example text on:\n" + prompt + " \nExample Text:\n",

This parameter could be causing an issue, depending on the use of the variable 'prompt'.

In addition, it could be the use of 'text-davinci-003' specifically. Is the issue consistent when using 'text-davinci-002'?

Yes, it's still happening with 002.

Not sure what your full prompt is, so I just entered the prompt as you posted it, with the value missing. Seems like what one would expect from such an abstract prompt :slight_smile:

The prompt I used in the code was: pool party invitation.

According to the page:
Yesterday, it appears there was a widespread influx of errors while using the model. Perhaps a similar scenario is occurring today. I assume the issue will be tackled shortly, and you will be able to prompt with limited interruption.

Here ya go… @dan2
(screenshots of several successful completions omitted)

As you can clearly see, the prompt is not an issue and I’m seeing zero errors.


My advice is to limit use of the model at the current time, perhaps only using it 10% as much as you have been. This would eliminate that 90% chance of an error occurring.
Just kidding, but only 90% kidding.
But for real, I do advise referring to the page in case errors persist; there may be updates coming soon regarding the issue.
There is an option to receive notifications regarding operation status.


It's working really badly lately; I'd say more than 50% of users are having problems at the moment. I guess there's no official answer, as there's been no official change :wink:

It's natural, but every time this error occurs for me, I have to switch to a new chat.
Is it just me, or do you guys have this too?