GPT-3.5-turbo error 404: not supported in the v1/chat/completions endpoint

This is what I copied from OpenAI:

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
     "model": "gpt-3.5-turbo",
     "messages": [{"role": "user", "content": "Say this is a test!"}],
     "temperature": 0.7
  }'

Then I called it in Postman and it returned this error:

{
    "error": {
        "message": "This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions?",
        "type": "invalid_request_error",
        "param": "model",
        "code": null
    }
}

Is there any mistake here?

I'm encountering the same issue. Is there any update on this?

Are you using an application or command that was working before?

Works for me at the moment, specifying "gpt-3.5-turbo":

  "model": "gpt-3.5-turbo-0613",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "My name is CrazyBot. How can I assist you today?"
      },
      "finish_reason": "stop"
    }
  ],
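
In case it helps, here is a minimal sketch of the same call in Python using the official openai package (v1-style client; adjust if you are on an older SDK version):

# Minimal sketch, assuming the openai Python package (>= 1.0)
# and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say this is a test!"}],
    temperature=0.7,
)

print(response.choices[0].message.content)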

The retry techniques for rate limiting outlined in the notebook below can also be used for other error codes and messages that indicate an intermittent service problem:

https://github.com/openai/openai-cookbook/blob/main/examples/How_to_handle_rate_limits.ipynb
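
A rough sketch of that idea in Python, retrying on 429 and 5xx responses with exponential backoff (uses the requests library; the retry counts and delays are just placeholders, not the cookbook's exact values):

import os
import time
import requests

# Retry intermittent errors (429 / 5xx) with exponential backoff.
# Assumes OPENAI_API_KEY is set; tune max_retries and base_delay as needed.
def chat_completion_with_retries(payload, max_retries=5, base_delay=1.0):
    url = "https://api.openai.com/v1/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    }
    for attempt in range(max_retries):
        response = requests.post(url, headers=headers, json=payload)
        if response.status_code not in (429, 500, 502, 503, 504):
            response.raise_for_status()  # surface non-retryable 4xx errors like the one above
            return response.json()
        time.sleep(base_delay * (2 ** attempt))  # back off: 1s, 2s, 4s, ...
    raise RuntimeError("Giving up after repeated rate-limit/server errors")

result = chat_completion_with_retries({
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Say this is a test!"}],
    "temperature": 0.7,
})
print(result["choices"][0]["message"]["content"])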