RateLimitError when using text-davinci-003

Getting this error while using the code below. Is OpenAI truly experiencing overload?

If that’s the case, why am I able to successfully process certain rows of my dataset when I rerun it?

It appears to be an intermittent error, despite my efforts to be friendly to the API by implementing a 5-second delay for each line.

import time

import openai
import pandas as pd

scores = []

# df is a DataFrame with a 'text' column holding the tweets
for index, row in df.iterrows():
    response = openai.Completion.create(
      model="text-davinci-003",
      prompt=f"Classify the sentiment in this tweet:\n\n{row['text']}\n\nSentiment rating:",
      temperature=0,
      #max_tokens=60,
      top_p=1.0,
      frequency_penalty=0.0,
      presence_penalty=0.0
    )

    score = response.choices[0].text.strip()
    scores.append(score)
    time.sleep(5)

A RateLimitError occurs not only when the AI is overloaded, but also when you hit the API too many times in a short period of time.
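Because the error can be intermittent, a common workaround is to retry the request with exponential backoff instead of a fixed delay. Below is a minimal, hypothetical sketch of such a wrapper; in real code you would catch `openai.error.RateLimitError` instead of the placeholder `RuntimeError` used here.

```python
import random
import time

def with_backoff(fn, max_retries=5, base_delay=1.0):
    """Call fn(), retrying on a rate-limit error with exponential backoff.

    Hypothetical helper: replace RuntimeError with
    openai.error.RateLimitError when wrapping real API calls.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except RuntimeError:
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            # wait 2^attempt * base_delay, plus jitter to spread retries out
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 1))
```

You would then wrap each completion call, e.g. `with_backoff(lambda: openai.Completion.create(...))`, so a transient rate limit stalls only that row instead of crashing the whole loop.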

EDIT:

You will find the limits for each model here:


Thank you for the information. I had already read it and ran a test on the row that consumes the most tokens. Based on that, I set a delay of 5 seconds per line, since there is no urgency to finish this dataset.

In theory, this delay should ensure that I have enough time to stay within the necessary limits while processing the rows. That’s why I find it strange.

It has been running for 18 minutes now, and it appears that the issue has been resolved.

Thank you for your time, best regards.
