This seems to be related to yesterday's status update regarding authentication issues and a caching service with capacity issues. This has been partially addressed, but work is still being carried out to permanently fix the issue.

That incident is resolved according to that status page. Why do you think there’s still work to be done?

The last time I checked the status, it was still in the monitoring phase. It now says resolved, but I don't think it is; there are still 500 errors and similar issues. Perhaps this will clear up as the fixes propagate through the network.

Sheesh, OpenAI support is also pathetic. Twice I provided them with my video and a link to this community topic, but I get semi-automated responses back assuming it's a problem on my end, and then they close the issue.

Reddit also has some topics about the performance degradation. But somehow, OpenAI either does not see it or is unwilling to at least respond that they are looking into it.

What a company…

Switching over to Azure, which is way faster anyways.


Still experiencing slowness. I am using the gpt-3.5-turbo-16k API. Are you guys fixing this, or leaving it as is?

They never do. There are so many opportunities to learn and test edge cases here, but they don't seem to bother until it expands and explodes. Over time I have seen influxes of complaints that are never addressed, and sometimes just dismissed with an obvious lack of care (on their Twitter, of course, their apparent safe space; maybe posting these things here could be considered "official").

Their support is completely overwhelmed.

The company is now managing so many different branches and products that all interact together. I truly can’t fathom how hard each employee is working there just to satisfy the demands of this AI race.

Safe bet for production.

No, an authentication server binary issue that affected everyone for a few hours has nothing to do with a week-long move of some accounts to gpt-3.5-turbo with 1/4 the token production rate.

Do you just use their cloud to host open-source LLMs? Or is there some OpenAI-like offering?

I'm mostly concerned with long contexts: open-source models have solutions for that, but it's a bit of a hassle to make them work well. I'd really appreciate more details on what you decided to use instead of OpenAI :eyes:

Azure has "OpenAI services", where they offer the same models and similar services at the same price. You must get your app/use case approved through their application process.

Just Azure’s OpenAI offerings. They also have 3.5 Turbo, which is ridiculously fast.
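For anyone curious what calling Azure's hosted models looks like, here is a minimal sketch of assembling the Azure OpenAI chat-completions REST URL. The resource name `my-resource` and deployment name `my-gpt35-deployment` are placeholders; your own values come from the Azure portal.

```python
# Sketch: building the Azure OpenAI chat-completions endpoint URL.
# "my-resource" and "my-gpt35-deployment" are hypothetical placeholders.

def build_azure_chat_url(resource: str, deployment: str, api_version: str) -> str:
    """Assemble the Azure OpenAI chat-completions URL for a given deployment."""
    return (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )

url = build_azure_chat_url("my-resource", "my-gpt35-deployment", "2023-07-01-preview")

# An actual request would then look roughly like this (needs a valid key):
# import requests
# resp = requests.post(
#     url,
#     headers={"api-key": "<AZURE_OPENAI_KEY>"},
#     json={"messages": [{"role": "user", "content": "Hello"}]},
# )
```

The main difference from the regular OpenAI API is that you address a named *deployment* of a model rather than the model name itself, and authenticate with an `api-key` header instead of a bearer token.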


We are seeing varying speeds right now across different types of accounts, from Enterprise to regular orgs to personal, and you almost have to make sure you have a dedicated account for each customer.

My responses have been getting slower and slower by the day. Is this getting fixed, or am I being throttled for a reason?

Just wanted to add my voice to the choir. Same issue: extremely slow gpt-3.5-turbo calls since October 10th. I also tried registering a new account (my old one is from June), but there's no difference between the two.

Response times are still incredibly slow on our end, averaging 18 seconds for a very simple prompt.
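If you want hard numbers to compare before and after (or against Azure), a tiny timing wrapper is enough. This is just a generic sketch; the commented-out `openai.ChatCompletion.create` usage assumes the 2023-era Python client.

```python
import time

def timed_call(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_seconds).
    Handy for logging per-request API latency."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Hypothetical usage with the OpenAI client:
# response, elapsed = timed_call(
#     openai.ChatCompletion.create,
#     model="gpt-3.5-turbo",
#     messages=[{"role": "user", "content": "Hi"}],
# )
# print(f"request took {elapsed:.1f}s")

# Works with any callable, e.g.:
result, elapsed = timed_call(sum, [1, 2, 3])
```

Logging these per request makes it easy to show support a concrete average rather than "it feels slow".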

The gpt-3.5-turbo-instruct model remains the faster solution for now, but is marked as “Legacy” in the API docs. Seems risky to use in production long term.

Has anyone found a solution to this issue?

That's the point! No one is talking about what "legacy" means, which I guess means it's NOT deprecated… I hope it becomes more like an LTS endpoint… (fingers crossed!)

gpt-3.5-turbo-instruct was released in September and is not legacy. The completions endpoint will continue; OpenAI had simply added a "legacy" flag to completions mode in the playground before the new models were released, because all the previous models there are slated for the axe in January 2024.

There is no solution, it seems, if your account has been assigned to interact with slow turbo chat models.

I come back to this thread bearing good news regarding GPT 3.5 performance.

You all know about Tiers 1 and 2, right? As an experiment, I’ve added $50 to my 12-day-old account to become tier 2.

(screenshot: payment screen showing $50 added in credits)

And you know what? GPT 3.5 speed has become normal.

This account has indeed become Tier-2 because I can add up to $250 now.

And my slow accounts are still slow as hell in the playground.

I don't know how the old 'pay-later' accounts can become Tier 2. I think those accounts are out of luck for now, because OpenAI is stonewalling on this issue.

Maybe the old ‘pay-later’ account will become Tier 2 if I make it a pay-in-advance account?

Experiment at your own risk. I can’t guarantee your success. I’m discussing only what I saw.


Thanks for sharing your experience.

But it's still a little weird to me. I've been a paying customer of the API for months, with my credit card attached and zero payment issues. And yet the organization that should be in a higher tier is the one receiving the low performance :sweat_smile:

I will also add that many of my users who bring their own key are seeing abnormal slowness. It's been going on for about 2-3 weeks, and it's about 3x slower.

Same here. Had a bad experience presenting my product to the client this morning.