How to understand and handle error codes from the OpenAI API?

Hey folks! We just shipped a new guide in the docs covering the different error codes you might get and how to handle them. Check it out and please send feedback if you have any: OpenAI API

4 Likes

Hi
Thank you for this. I have been meaning to add some error handling that gives my clients more insight into why a request has failed.

However, I have one observation about this error:

429 - The engine is currently overloaded. Please try again later.

Shouldn’t this error be in the 500 range, as it’s a server-side problem?

4xx - This class of status code is intended for situations in which the error seems to have been caused by the client.
5xx - Response status codes beginning with the digit “5” indicate cases in which the server is aware that it has encountered an error or is otherwise incapable of performing the request.

I feel a little embarrassed that I am quoting Wikipedia and not an RFC, but I am pretty sure this is accurate.
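For what it’s worth, the 4xx/5xx split quoted above can be sketched as a tiny classifier (nothing OpenAI-specific here, just the generic HTTP convention):

```python
# Sketch of the generic HTTP convention: 4xx blames the client,
# 5xx blames the server. Plain integer status codes, no libraries.

def classify_status(status: int) -> str:
    """Return who the HTTP spec says is responsible for the error."""
    if 400 <= status <= 499:
        return "client"  # 4xx: the request itself appears to be at fault
    if 500 <= status <= 599:
        return "server"  # 5xx: the server failed or cannot comply
    return "none"        # 1xx/2xx/3xx are not error classes

print(classify_status(429))  # → "client", which is what makes the wording odd
```

By this convention a 429 is formally a client-class error, which is exactly why the “engine is currently overloaded” wording feels misfiled.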


This 429 error code is from Cloudflare, not the backend server:

See:

Cloudflare: 4xx Series Error Codes

The underlying RFC (RFC 6585) defines 429 as:

“The 429 status code indicates that the user has sent too many requests in a given amount of time (“rate limiting”).”

I agree, that is a user error.

But the OpenAI description is:

“429 - The engine is currently overloaded. Please try again later.”

The latter is not describing a user error but instead calling out the server (the “engine”) as the cause of the error. And rather than admonishing the user for making too many requests, it suggests they try again later, as if they have done nothing wrong.

There probably should be an error for too many user requests at once. But my experience is that the OpenAI server is also failing because it is overloaded, so a 500-range error for that would be appropriate. (I also feel I have seen one in the 500 range for that; I will have to keep an eye out.)
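Whichever status class ends up being used, the practical client-side handling for “overloaded, try again later” is the same: retry with exponential backoff and jitter. A minimal sketch, where `call_api` is a hypothetical stand-in for whatever request function you use (not part of the OpenAI client):

```python
import random
import time

def with_backoff(call_api, max_retries=5, base_delay=1.0):
    """Retry `call_api` on 429/5xx responses with exponential backoff.

    `call_api` is assumed to return a (status_code, body) tuple.
    """
    for attempt in range(max_retries):
        status, body = call_api()
        if status not in (429, 500, 502, 503):
            return status, body  # success, or a non-retryable error
        # Sleep base * 2^attempt plus jitter so many clients
        # hitting the same overload don't all retry in lockstep.
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
    return status, body  # give up and surface the last response
```

The jitter matters: if every client retries on the same schedule, the overloaded server just gets the same thundering herd again one second later.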

Yeah, it’s an application in beta, per the T&Cs, so of course things need to be polished and refined.

There are always issues with error messages and feedback to users during the beta stage of application development. As someone currently working on my third OpenAI API app, these little beta issues are not very annoying to me, but I can see and understand how others are annoyed by them and wish they were much better.

It’s good to document these issues. Well done.

Thank you. I’m not complaining. I’m trying to contribute.

1 Like

Yes, you are a strong contributor @paul.armstrong and I always enjoy reading your posts.

Thank you.

2 Likes

Hello :slight_smile:
I’m using the model ‘text-davinci-003’ in a Next.js TypeScript project, and it works perfectly from localhost:3000. But when I push to GitHub and Netlify deploys the site, I get this error:
react_devtools_backend.js:4012 Error: Request failed with status 502

I checked devtools and I have “Break on warning” disabled . . .
I read in another forum that application/json might get converted to text/html on build, and that this could produce the error online, but I haven’t been able to solve it . . .

Any good ideas out there?
Thanx

@schnurr any suggestions here? I am not that proficient in Node.js

1 Like

This could be it? I’ve seen one other Netlify GPT project, I think.

Can you add a “console.error” so we can get more information?

Was it a one-time thing or does it happen multiple times? Always the same prompt?

Sorry you’re experiencing this! This sounds more like a client-specific or environment-specific error considering it works on your local machine, and I think this particular help article is more about error codes returned by the backend API itself.

Can you open an issue in the openai-node repository? I agree with Paul it would be helpful to see a full stack trace.

I made sure it returned JSON with an Accept header, so that works, but I still get the error when online.
It happens on every call . . . it’s like the API is rejecting my post . . .

Hmmm . . . full stack trace . . . You can test the site here: https://blockdefi.netlify.app (left menu; you don’t need to connect to use the OpenAI part). I’m not so good at reading errors in the console, but I found this:

Could it be an authentication issue? . . . I found that my APP_DOMAIN needs to be set to domain.app while NEXTAUTH_URL must be https://domain.app . . . Furthermore, is it path-specific in some way? . . .
Like if the request comes from domain.app/api/auth/script.ts but somewhere only the domain is set? . . . I don’t know . . . I’m reaching . . . I’ve been struggling with this issue for two weeks for an exam project and I feel I’ve tried everything . . .

Hello,

I’ve noticed a change in your error messages pertaining to the invalid_request_error type and context_length_exceeded code.

Previously, the error message was more detailed and looked like this:

This model's maximum context length is 8192 tokens. However, you requested 13576 tokens (5127 in the messages, 8449 in the completion). Please reduce the length of the messages or completion

Recently, however, it has been simplified to:

This model's maximum context length is 8192 tokens. However, your messages resulted in 11775 tokens. Please reduce the length of the messages

I found the previous format more beneficial, as it allowed me to adjust my token count based on the detailed breakdown provided. The new format, however, doesn’t offer this assistance.

While I strive to calculate the max_token size accurately on the first attempt, I’m constrained by the fact that I’m working in Apps Script and lack a precise method to count tokens. If the first try fails, the detailed error message allowed me to ensure that my second attempt would succeed. With the new format, I’m unable to do so.

My preference would be not to parse the error message, but rather to receive more structured information on token counts when an error occurs. This would greatly facilitate my handling of such situations.
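Until structured token counts are available in the error payload, parsing the message text is the only workaround. A small sketch that handles both wordings quoted above; note the exact phrasing is OpenAI’s and may change without notice, which is precisely why a structured field would be better:

```python
import re

def parse_context_error(message: str) -> dict:
    """Pull numeric token counts out of a context_length_exceeded message.

    Handles both the older detailed wording and the newer simplified one,
    as quoted in this thread. Returns whatever counts are present.
    """
    counts = {}
    m = re.search(r"maximum context length is (\d+) tokens", message)
    if m:
        counts["max_context"] = int(m.group(1))
    m = re.search(r"you requested (\d+) tokens", message)
    if m:
        counts["requested"] = int(m.group(1))
    m = re.search(r"resulted in (\d+) tokens", message)
    if m:
        counts["messages"] = int(m.group(1))
    # The older wording broke the total down further:
    m = re.search(r"\((\d+) in the messages, (\d+) in the completion\)", message)
    if m:
        counts["messages"] = int(m.group(1))
        counts["completion"] = int(m.group(2))
    return counts
```

With the old format this recovers all four numbers; with the new format only the maximum and the message total survive, so there is no way to derive how much room the completion was asking for.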