The URL does exist and responds.
It looks like the error message contains two single quotes; have you manually specified the URL in a way that differs from what the module uses or expects by default?
If you were using the wrong endpoint for the model, you'd instead get a return message like: "OpenAI API request was invalid: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?" — assuming you have proper error handling in place, such as an `except openai.error.InvalidRequestError as e:` clause when using the openai Python library.