Let’s not lock threads where people discuss the downtime. This thread became a discussion board for the unreasonably high amount of downtime on the API endpoints: there has been an outage on the API endpoints for 23% of the last 90 days. We’re just frustrated that OpenAI is clearly scaling at a rate beyond their capacity, and that this is coming at the expense of existing customers’ uptime.
Don’t worry, the API is plenty slow/problematic recently for everyone else too
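For anyone stuck dealing with the flakiness in the meantime: a plain retry-with-exponential-backoff wrapper absorbs most transient 5xx/timeout failures. This is a generic sketch, not OpenAI's recommended client code; the exception handling is deliberately broad and you'd narrow it to whatever your client library actually raises.

```python
import random
import time


def with_backoff(call, max_retries=5, base_delay=1.0, max_delay=30.0):
    """Retry `call` on failure with exponential backoff plus jitter.

    `call` is any zero-argument function that raises on a transient
    error (e.g. a wrapper around an API request). Catching bare
    Exception is for illustration only -- narrow it in real code.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the last error
            # Sleep base * 2^attempt, capped, with jitter so many
            # clients don't all retry at the same instant.
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay + random.uniform(0, delay / 2))
```

It won't fix a multi-hour outage, but it does smooth over the intermittent errors people are describing here.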
It’s experimental and beta. I know that seems like an excuse, but it really is experimental and beta.
They really aren’t kidding. It’s not just boilerplate.
You’re paying $ to be able to participate.
That said! They really should be more up front about how flaky the endpoint is. Like put the downtime average in boldface on the account management page.
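In the meantime you can poll the status yourself. Assuming status.openai.com is a standard Atlassian Statuspage (the `/api/v2/status.json` path below is the generic Statuspage convention, not something OpenAI documents for this purpose), a minimal check looks like:

```python
import json
from urllib.request import urlopen

# Assumed Statuspage v2 endpoint -- verify the URL for your provider.
STATUS_URL = "https://status.openai.com/api/v2/status.json"


def parse_status(payload):
    """Extract the overall indicator ('none', 'minor', 'major',
    'critical') and its human-readable description from a
    Statuspage v2 status payload."""
    status = payload.get("status", {})
    return status.get("indicator", "unknown"), status.get("description", "unknown")


def fetch_status(url=STATUS_URL, timeout=10):
    """Fetch and parse the current overall status."""
    with urlopen(url, timeout=timeout) as resp:
        return parse_status(json.load(resp))
```

Run `fetch_status()` on a schedule and you've got your own downtime log, since the dashboard won't show you a 90-day average.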
I’ve said it before. OpenAI is not a product or service company. They are a research company, and you should treat them as such. The billing limit alone is the biggest proof of that. If you want highly scalable services and stable usage, use Azure. That’s why they partnered with them. You should be using OpenAI for the cutting-edge experimentation and to help out with the research efforts and Azure to build services. At some point OpenAI will probably move to a more service-oriented model but that is definitely not now.
Just out of curiosity, does anyone know the uptime guarantee on Azure Cognitive Services? And does the SLA cover their GPT-4 offering?
What are the SLAs for API responses in Azure OpenAI?
We don’t have a defined API response time Service Level Agreement (SLA) at this time. The overall SLA for Azure OpenAI Service is the same as for other Azure Cognitive Services. For more information, see the Cognitive Services section of the Service Level Agreements (SLA) for Online Services page.
Not sure what that means. So there’s no response-time guarantee at all?