GPT-4: ESOCKETTIMEDOUT error when calling the api

I am getting this very strange ESOCKETTIMEDOUT error when I make an API call:


The prompt works fine in the playground but when I make an API call it gives me this error. Note that other API calls work fine.

I am using “gpt-4” as the model and “” as the endpoint. The error occurs only when the request contains more than 1,000 tokens.


You might want to try this parameter.

Unfortunately, it still doesn’t work. @logankilpatrick

How did you implement the stream feature? It works fine for me in NodeJS.

GPT-4 is kinda slow right now, as I bet a lot of people are taking it through its paces. Also, longer prompts require more processing and come back more slowly.

Text-davinci-003 is a lot more stable and quick now, I’ve noticed… I’m sure GPT-4 will stabilize (and hopefully drop in price some…)

How long is your timeout in your code? What language are you using?

How do we configure the endpoint? I have the gpt-4 invite, but I can only get previous engine versions such as “text-davinci-003” working.

This is what I have in my python web app currently, and it’s working.

    # prompt and max_tokens values here are illustrative
    response = openai.Completion.create(
        engine="text-davinci-003",
        prompt=prompt,
        max_tokens=256,
    )

If I input “gpt-4” as the engine, I get the error openai.error.InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?

Those are very good instructions from the API error message.

If you searched for the text in that error message, it would lead you to working example API code:
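For reference, the chat completions endpoint takes a list of role/content messages rather than a single prompt string. A minimal sketch of the request body (the prompt and values here are illustrative):

```python
import json

def build_chat_request(prompt, model="gpt-4"):
    # Chat models take a "messages" list, not a bare "prompt" string
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

body = json.dumps(build_chat_request("Hello!"))
print(body)
```

With the openai Python package, this corresponds to calling `openai.ChatCompletion.create(model="gpt-4", messages=[...])` instead of `openai.Completion.create(...)`.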

You can also visit the API docs in the same web docs area:

Hope this helps.


@bill.french and @PaulBellow I am using a curl call in a web app. Perhaps I know what is wrong. Is the response still JSON after you add "stream": true?
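For context: with "stream": true the API does not return a single JSON object; it returns server-sent events, where each line looks like `data: {json chunk}` and the stream ends with `data: [DONE]`. A sketch of assembling the streamed text in Python (the chunk shape below is the chat streaming delta format; the sample lines are made up for illustration):

```python
import json

def parse_sse_chunks(raw_lines):
    """Collect content deltas from server-sent-event lines like 'data: {...}'."""
    pieces = []
    for line in raw_lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":  # sentinel marking the end of the stream
            break
        chunk = json.loads(payload)
        # each chunk carries a partial piece of the message in choices[0].delta
        pieces.append(chunk["choices"][0]["delta"].get("content", ""))
    return "".join(pieces)

sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    'data: [DONE]',
]
print(parse_sse_chunks(sample))  # -> Hello
```

So a plain `JSON.parse` (or `json.loads`) on the whole response body will fail once streaming is enabled; the events have to be read and parsed line by line.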