Did limits for gpt-4-1106-preview suddenly drop today?

Definitely a nasty move from OpenAI if they suddenly dropped the limits back down after having them so high. My service is dead in the water at the moment.


Appears I’m not the only one, although I’m surprised more people aren’t howling right now. Maybe only some accounts were affected?


I was wondering what happened. I hit my limit before lunch and was very confused.

We are affected as well. We started getting rate limit errors around 2:30pm Pacific time.

We were affected around the same time today as well. 200 RPD limit out of nowhere.
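For anyone else suddenly hitting rate-limit errors, here’s a minimal retry-with-backoff sketch. Note the `send_request` callable and the `RateLimitError` class are stand-ins for illustration, not the real client’s API:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the client library's rate-limit (429) exception."""

def with_backoff(send_request, max_retries=5, base_delay_s=1.0):
    """Call send_request, retrying on rate-limit errors with
    exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return send_request()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            # Sleep base * 2^attempt plus jitter, capped at 60 s.
            delay = min(base_delay_s * 2 ** attempt, 60.0)
            time.sleep(delay + random.uniform(0, 0.25))

# Stub request: fail twice with 429-style errors, then succeed.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("429")
    return "ok"

result = with_backoff(flaky, base_delay_s=0.01)
```

Backoff only helps with per-minute throttling, though; if the problem is a hard 200 RPD cap like people are reporting here, retries will just burn through the daily quota faster.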

Latency spikes have made gpt-3.5-turbo-1106 unusable, so we’ve basically had to fall all the way back to gpt-3.5-turbo-16k-0613.
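A minimal sketch of that kind of fallback, assuming a generic `call_model` callable rather than any particular client library (the model names are the ones from this thread):

```python
import time

def complete_with_fallback(call_model, models, latency_budget_s=10.0):
    """Try each model in order; fall back when a call raises
    (e.g. a rate-limit or timeout error) or blows the latency budget."""
    last_error = None
    for model in models:
        start = time.monotonic()
        try:
            result = call_model(model)
        except Exception as exc:
            last_error = exc
            continue  # try the next model in the list
        if time.monotonic() - start <= latency_budget_s:
            return model, result
        last_error = TimeoutError(f"{model} exceeded latency budget")
    raise last_error

# Stub: the newer model "fails" with a latency spike, the older one works.
def fake_call(model):
    if model == "gpt-3.5-turbo-1106":
        raise TimeoutError("latency spike")
    return "ok"

used, out = complete_with_fallback(
    fake_call, ["gpt-3.5-turbo-1106", "gpt-3.5-turbo-16k-0613"]
)
```

In production the `call_model` stand-in would wrap the actual chat-completion request with a client-side timeout, so a slow model fails fast instead of hanging.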

To be fair, it’s a preview model, not meant for production. Looks like they are completely over capacity.


In my case, your rate limits are higher than mine except for gpt-4-1106-preview RPD. I have 10,000 RPD but lower TPM and the same RPM.


Same here.

Same here. The daily cap is now set below what the per-minute limit would allow. The documentation doesn’t reflect this.

Is this a bug or a screwup? It’s very disconcerting, particularly given that we just switched from Azure. Rug pulls on paid services are very unnerving and break trust.

Totally understand this, but it still feels sketchy to raise the limits and then pull them back again.

Can you tell me any reason why on earth you would switch from Azure?

Because Azure took over a month to add function calls when the last big update happened. There’s a lot of pressure to keep up with the latest, and Azure has been too slow.

I still have the code to run on Azure, so I can switch back. I’m not yet committed to Assistants and Threads, so it’s doable.

Hi folks – this was completely unintentional and we’re rolling out a fix asap. Sincere apologies for the trouble caused here.


Thanks for stopping by to let us know. As a community, we appreciate it.

Keep up the great work!


Oh thank goodness. Mistakes happen. Thanks for letting us know!


Thank goodness. Thanks for the clarification.



Thank you for letting us know! Glad I can keep playing 🙂