Does OpenAI charge you for failed (timeout error) requests?

I’ve been testing a tool that makes API calls to GPT-4 and retries if it encounters a timeout error. For queries with a lot of tokens, timeout errors seem to happen often, but it looks like we might be getting charged for those failed requests.
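
For context, the retry logic is roughly the pattern below (a minimal sketch using the pre-1.0 `openai` Python package; the function name, retry count, and messages are just illustrative):

```python
import openai

def ask_gpt4(messages, max_retries=3):
    """Call GPT-4, retrying whenever the client times out waiting for a reply."""
    for attempt in range(1, max_retries + 1):
        try:
            return openai.ChatCompletion.create(
                model="gpt-4",
                messages=messages,
            )
        except openai.error.Timeout:
            # The client gave up waiting, but the server may still be
            # generating (and billing) the original completion.
            print(f"Timeout on attempt {attempt}, retrying...")
    raise RuntimeError("All retries timed out")
```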

Below is the output from the tool I’m testing, and then the requests shown in the OpenAI usage console:

As you can see, each retried request shows an identical number of tokens in the usage console. Is anyone else experiencing this?


Just wanted to update: this was user error.

There’s a “request_timeout” parameter that I wasn’t setting. The default value doesn’t give GPT-4 enough time to finish a token-heavy response, so my tool’s client-side timeout fired and forced a retry even though a response was still on its way from the API. That presumably also explains why every “failed” request showed up in the usage console: the original request still completed on OpenAI’s side. I fixed this by setting “request_timeout” to 120 seconds.
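
In case it helps anyone else, the fix is just one extra argument on the call (again a sketch for the pre-1.0 `openai` Python package; newer versions of the library expose this as a `timeout` option on the client instead):

```python
import openai

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "...a long, token-heavy prompt..."}],
    # Allow up to 120 seconds for the full response instead of letting the
    # default client-side timeout cut it off and trigger a retry.
    request_timeout=120,
)
print(response["choices"][0]["message"]["content"])
```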
