Continuous gpt3 api 500 error: The server had an error while processing your request. Sorry about that!

Still getting this even today… what in the world is going on?

{
  "error": {
    "message": "The server had an error while processing your request. Sorry about that!",
    "type": "server_error",
    "param": null,
    "code": null
  }
}

Suddenly got the same problem yesterday!

On my side it lasted around 30 and then disappeared. Very painful for me.

Amazingly, days later, the same error still occurs on almost every other submission.

{
  "error": {
    "message": "The server had an error while processing your request. Sorry about that!",
    "type": "server_error",
    "param": null,
    "code": null
  }
}

The error is the same. Is there an official server status page for the OpenAI API? This is very critical.

Same problem for us, manifests as a 429 error with the body “The server had an error while processing your request. Sorry about that!”

No response from OpenAI on their Messenger either…


No response at all from anyone at OpenAI. There must’ve been hundreds of people who have messaged them by now… dozens of posts here on the forum, and not a single word from the OpenAI team. It’s frustrating because, for many of us, our products rely on OpenAI to function… and they are broken now. Having 30-50% failed/blank outputs is simply not feasible.


There is no need for a delay in loops. The API limit for requests on paid accounts is 3000/minute. It makes absolutely no sense why anyone would put a “delay” in the loop unless they are looping over 3000 items (which, using davinci, would almost certainly take over 60 seconds).

This current bug has nothing to do with a delay. No matter how much you delay, there’s still a 30-50% chance it will fail and you will receive a “server_error” “The server had an error while processing your request. Sorry about that!”

https://help.openai.com/en/articles/5955598-is-api-usage-subject-to-any-rate-limits
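Since delays don't prevent these failures, the only practical client-side mitigation is to retry failed requests with backoff. Here is a minimal sketch (the helper name is illustrative, and the `flaky` function merely stands in for the real API call, which fails twice and then succeeds):

```python
import random
import time

def retry_with_backoff(fn, max_attempts=6, base_delay=1.0, max_delay=30.0):
    """Call fn(); on exception, sleep with exponential backoff and retry."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise
            # 1s, 2s, 4s, ... capped at max_delay, plus a little jitter.
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(delay + random.uniform(0, delay))

# Simulated flaky endpoint: raises "server_error" twice, then succeeds.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("server_error")
    return "ok"

print(retry_with_backoff(flaky, base_delay=0.01))  # prints "ok"
```

In real code you would wrap the actual OpenAI call and ideally re-raise immediately on non-transient errors (e.g. invalid requests) instead of catching every exception.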

Totally not true in reality.

Maybe your hardware and network connection are slow, so you don’t need a delay :wink: I’m on fast hardware with a direct connection to a national fiber optic backbone.

I never get the errors posted here, even though the API is beta and overloaded (because delays work).

It is a BETA…

:slight_smile:

Note: here is a real-world example (not theory) of an embedding loop, which I run often, that fails (on my M1 Mac Studio, on a fiber-optic national backbone) if there is no delay in the loop; I tweaked it after extensive testing. I can “get by” with a 1-second delay, but since the API is beta and there are a lot of users, I go “easy” on the API with a 2-second delay. This partial code block queries a DB for completion entries with nil vectors and, for each nil entry, calls the API to get the vector from OpenAI. The output is an array of hashes which will be passed to another method to update the DB.

module Embeddings
  def self.get_client
    Ruby::OpenAI.configure do |config|
      config.access_token = ENV.fetch('OPENAI_API_KEY')
    end
    OpenAI::Client.new
  end

  def self.update_nil_completion_vectors
    nil_completions = Completion.where(vector: nil).pluck_to_hash
    if nil_completions.length == 0
      puts "Embeddings.update_nil_completion_vectors: Nothing to update."
      return []
    end
    updated_nils = []
    nil_completions.each do |completion|
      model = "text-similarity-ada-001"
      begin
        vector = get_vector(completion[:reply], model)
        if vector.length > 10
          Completion.where(id: completion[:id]).update_all(vector: vector, vector_model: model)
          updated_nils << { id: completion[:id], reply: completion[:reply], vector_model: model }
        end
      rescue => exception
        puts "Embeddings.update_nil_completion_vectors error: #{exception}"
      end
      # loop fails without this delay, with API errors
      sleep 2
    end
    updated_nils
  end

  # ....
end

Here is another partial code block which also sometimes fails (OpenAI API error) unless there is a delay in the loop:

if @fine_tune
  @fine_tune_id = ""
  @fine_tuned_status = ""
  if @valid_jsonl
    @fine_tune_id = FineTunings.tune_from_text(@text)
  end
  sleep 2
  if @valid_jsonl
    @fine_tuned_status = FineTunings.status(@fine_tune_id)
  end
end

The UI (adding methods weekly)

Code above from my lab:

Finally…

Since we all know the OpenAI API is BETA and the API is overloaded, I also consider it “good API manners” to slow down loops to take pressure off the API. Maybe that is why I do not see or have API failures.

Be a good programming citizen during BETA ! APIs need love too :slight_smile:

:slight_smile:

I use the API on localhost, not slow at all. I generate entire websites, and have hundreds of prompts being submitted in a loop at any given time… No delay. Before this bug, I’d occasionally get a blank response, but it was very very rare. Now it’s 30-50% of the time, even with a 2-3 second delay. The delay makes no difference.

I used the following libraries to work around this issue:

ratelimiter (github)
retrying (github)

Example usage:

import openai
from ratelimiter import RateLimiter
from retrying import retry

from secret import OPEN_AI_API_KEY

openai.api_key = OPEN_AI_API_KEY


# Retry up to 10 times on any exception, and cap calls at 20 per 60 seconds.
@retry(stop_max_attempt_number=10)
@RateLimiter(max_calls=20, period=60)
def generate_openai_completion(
    prompt,
    model="text-davinci-003",
    max_tokens=100,
    temperature=0.68,
    number=3,
    top_p=1,
    best_of=10,
    frequency_penalty=0.8,
    presence_penalty=0,
):
    response = openai.Completion.create(
        model=model,
        prompt=prompt,
        temperature=temperature,
        max_tokens=max_tokens,
        n=number,
        top_p=top_p,
        best_of=best_of,
        frequency_penalty=frequency_penalty,
        presence_penalty=presence_penalty,
    )

    return response

Is your problem solved?

Been having outages for the last hour or so. Nothing on the Open AI status page.

This is a simple rate limiter that has nothing to do with the error; all it does is limit requests to 20/minute. The limit for paid accounts is 3000/minute.
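For comparison, a client-side limiter pinned at that paid-tier ceiling (one request every 20 ms) can be sketched in a few lines; the class below is illustrative, not from any poster's code:

```python
import time

class MinIntervalLimiter:
    """Keep consecutive calls at least 60 / max_calls_per_minute seconds apart."""

    def __init__(self, max_calls_per_minute):
        self.interval = 60.0 / max_calls_per_minute
        self.last = 0.0

    def wait(self):
        # Sleep only if the previous call was less than `interval` seconds ago.
        remaining = self.last + self.interval - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)
        self.last = time.monotonic()

limiter = MinIntervalLimiter(3000)  # paid tier: at most one request every 20 ms
for _ in range(3):
    limiter.wait()
    # ... send one API request here ...
```

Of course, as noted above, staying under the rate limit does not prevent this particular `server_error`.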


This issue has been ongoing for 72 hours (or more), and there has yet to be an update on the OpenAI status page. It seems to be getting worse; I’m getting a ~50-60% error rate now.

I’ve got the same issue here. I’m already throttling back hard when I get errors.

The embedding is fine - when users ask for completions, it causes grief.

I’m changing the error messages to let users know that OpenAI’s service is the issue, not my software. I don’t see any other options right now.
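That approach, surfacing a message that points users at the upstream outage rather than your own app, can be as simple as mapping the `type` field of the API's error body (function and wording below are illustrative, not from anyone's actual code):

```python
def user_facing_message(error):
    """Translate an OpenAI API error body into a message for end users.

    `error` is the parsed "error" object from the JSON response,
    e.g. {"message": "...", "type": "server_error", ...}.
    """
    if error.get("type") == "server_error":
        return ("OpenAI's API, which powers this feature, is currently "
                "having server trouble. Please try again in a moment.")
    return "The request failed. Please try again."

print(user_facing_message({"type": "server_error"}))
```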

I agree, I encounter the error more often than yesterday.

I’ve managed to work around it by adding 10 curl retries (using PHP). Oftentimes I need 3+ requests before one works, but it does eventually work. This is without a doubt a problem with OpenAI’s servers, and hopefully it’s resolved soon.

Because I’m in the same boat, people are asking what’s going on and I don’t really have an answer besides that it’s on OpenAI’s end. The worst part is that OpenAI’s API status has shown “fully operational” the entire time the error has been happening… which makes some users think the problem is on my end. I’m going to link to this thread so users can see that many people are having the same problem.


It’s hardly usable today.

Yeah, almost every request is getting this error. They should do something about that.
