Assistants v2 API still returning 429 “Rate Limit Exceeded” since Oct 7 — not resolved

Hi everyone,

@OpenAI_Support I’m reopening this discussion because the Assistants v2 API 429 issue that community/support marked as resolved on Oct 7 still persists for my organization.

Since Oct 7 (≈12:00 UTC), every request to the Assistants v2 endpoints instantly returns:

{
  "error": {
    "message": "You've exceeded the rate limit, please slow down and try again later.",
    "type": "invalid_request_error",
    "code": "rate_limit_exceeded"
  }
}

even after hours of inactivity.

:puzzle_piece: Environment

  • Organization ID: org-0WYw7rG1ZZogE3m4wuKjps70

  • Project ID: proj_lOA2KbEo9QLis0pSgnXtsj34

  • Billing: Paid Tier 3 (Pay-as-you-go, auto-recharge on, positive credit balance)

  • Usage: Far below limits (~7M tokens, 2,239 requests)

  • Endpoints affected:

    • POST /v1/threads/<id>/messages

    • POST /v1/threads/<id>/runs

    • POST /v1/threads/runs

  • Endpoints working normally:

    • /v1/chat/completions

    • /v1/responses

    • /v1/threads creation

Occurs on all models (gpt-4o, gpt-4.1-mini, gpt-3.5-turbo) and across all clients (see the reproduction sketch after this list):

  • Direct curl

  • Make.com (HTTP POST + ChatGPT modules)

  • OpenAI Playground (Assistants tab)
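
For reference, a minimal reproduction sketch (Node 18+ ES module with built-in fetch; the thread and assistant IDs are placeholders and OPENAI_API_KEY is assumed in the environment). The direct-curl equivalent of this call immediately returns the 429 body shown above:

// Minimal reproduction sketch (top-level await assumes a Node 18+ ES module).
// THREAD_ID and ASSISTANT_ID are placeholders; any existing thread/assistant
// in the affected project behaves the same way.
const THREAD_ID = "thread_xxx";
const ASSISTANT_ID = "asst_xxx";

const res = await fetch(`https://api.openai.com/v1/threads/${THREAD_ID}/runs`, {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
    "Content-Type": "application/json",
    "OpenAI-Beta": "assistants=v2", // required header for Assistants v2 endpoints
  },
  body: JSON.stringify({ assistant_id: ASSISTANT_ID }),
});

console.log(res.status);                      // 429, returned instantly
console.log(res.headers.get("x-request-id")); // request ID to hand to support
console.log(await res.json());                // the error body shown above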

:light_bulb: Already verified

:white_check_mark: Billing active and Tier 3 confirmed
:white_check_mark: Proper headers / syntax
:white_check_mark: Tested multiple API keys (project + org-level)
:white_check_mark: Reproduces 100% even after a 1-hour delay
:white_check_mark: Not exceeding any RPM/TPM limits (see the header check after this list)
:white_check_mark: Reproduces inside Playground → server-side issue
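
On the RPM/TPM point: headroom can be confirmed by reading the documented x-ratelimit-* response headers that a working endpoint such as /v1/chat/completions still returns. A minimal sketch (Node 18+ ES module, OPENAI_API_KEY assumed in the environment):

// Sketch: confirm rate-limit headroom via the x-ratelimit-* headers
// returned by endpoints that still work.
const res = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "gpt-4o",
    messages: [{ role: "user", content: "ping" }],
    max_tokens: 1,
  }),
});

for (const name of [
  "x-ratelimit-limit-requests",
  "x-ratelimit-remaining-requests",
  "x-ratelimit-limit-tokens",
  "x-ratelimit-remaining-tokens",
]) {
  console.log(name, res.headers.get(name)); // remaining values are nowhere near zero
}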

:warning: Impact

All API requests, curl calls, and Assistants web-interface interactions have been failing with 429s for several days, despite the Oct 7 incident being marked resolved.

:sos_button: Request

Could OpenAI staff @OpenAI_Support @OpenAIAPIhelper please re-check whether Assistants v2 still has a backend throttle or stuck limiter on certain projects?

If other users still experience the same post-Oct 7 behavior, please comment below to help confirm scope.

4 Likes

Still getting 429s on all Assistants v2 endpoints since the Oct 7 “resolved” notice - affects both the API and the Playground.

@OpenAI_Support @OpenAIAPIhelper please re-check this, it blocks production use.

I’m a premium (paid Tier 3) user - this level of silence and downtime is frustrating and unacceptable.

2 Likes

We have been experiencing the same issue since yesterday. We have reported this problem, but it has not been resolved yet. This is affecting our production services. Any update on the fix would be appreciated.

1 Like

Same here - still getting 429s on all Assistants v2 endpoints even after days of inactivity.
I also see the errors directly in the Playground dashboard, so it’s clearly not a usage issue on our side.
This has been ongoing since Oct 7 and still no response from support - definitely looks like a backend throttle or stuck limiter. Any update from OpenAI staff @OpenAI_Support @OpenAIAPIhelper would be appreciated.

This would be an API organization issue - the platform site UI is nothing more than an interface to your own organization.

The likely fault here is the API call-count limit, which was never made adequate for deploying Assistants as a large multi-user product - limits like 200 thread updates per minute apply regardless of tier.

Is there going to be some ‘fix the limit’ button for a contractor to press (even if OpenAI support weren’t bots sending back default denials of responsibility)? No. It is more likely some corruption in the organization’s records - say, the API rate-limiter database holding a 2**32 value from an overflow, or an index pointing at garbage bytes.

The only facility you’d have for improving the situation yourself is to create a new project, never touch any of the limit controls for its new keys, and see if that results in success. To try it on the platform site, switch to the new project at the upper left.
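
A sketch of that test, if you want to script it rather than use the Playground (NEW_PROJECT_API_KEY is an assumed environment variable holding a key minted in the fresh project):

// Sketch of the new-project test (Node 18+ ES module, top-level await).
// NEW_PROJECT_API_KEY is an assumed env var with a key from the fresh project.
const headers = {
  "Authorization": `Bearer ${process.env.NEW_PROJECT_API_KEY}`,
  "Content-Type": "application/json",
  "OpenAI-Beta": "assistants=v2",
};

// 1. Thread creation works even in the affected org, so it should succeed here.
const thread = await (
  await fetch("https://api.openai.com/v1/threads", { method: "POST", headers, body: "{}" })
).json();

// 2. Posting a message is one of the calls that instantly 429s in the affected org.
const res = await fetch(`https://api.openai.com/v1/threads/${thread.id}/messages`, {
  method: "POST",
  headers,
  body: JSON.stringify({ role: "user", content: "limiter test" }),
});

// 429 again => the limiter problem follows the organization, not the project;
// 200 => the new project's keys are usable as a stopgap.
console.log(res.status, await res.json());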

"OpenAI infrastructure staff support needed to repair damaged Assistants API call limit backend organization database" or similar is the action you’ll need to demand. Or demand a new organization ID filled with the existing credits, ID verification, and more.

3 Likes

Huge thanks, @_j - your insight was spot-on.

I spun up a new organization (not just a new project), recreated my assistants there, remapped IDs/keys, and it’s working. So this is a workaround, not a fix, and it’s a lot of manual hassle - but it unblocked me.

For anyone else hitting instant 429s on Assistants v2 (even in Playground) despite low usage:
Workaround: create a new org, recreate assistants, and remap API keys/IDs. New projects alone didn’t help.
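
If it helps anyone, the recreation step can be scripted. A rough sketch with the openai Node SDK - OLD_ORG_API_KEY and NEW_ORG_API_KEY are assumed environment variables, and note that attached files / vector stores are not copied by this, so those still have to be re-uploaded by hand:

// Rough sketch: copy assistant definitions from the broken org into the new one.
// Requires the openai npm package; run as a Node 18+ ES module.
// Assumes listing assistants in the old org still works; if not, recreate them by hand.
import OpenAI from "openai";

const source = new OpenAI({ apiKey: process.env.OLD_ORG_API_KEY });
const target = new OpenAI({ apiKey: process.env.NEW_ORG_API_KEY });

const idMap: Record<string, string> = {}; // old assistant ID -> new assistant ID

for await (const assistant of source.beta.assistants.list()) {
  const copy = await target.beta.assistants.create({
    model: assistant.model,
    name: assistant.name,
    instructions: assistant.instructions,
    tools: assistant.tools,
  });
  idMap[assistant.id] = copy.id;
}

console.log(idMap); // use this map to remap asst_... IDs in integrations (Make.com, etc.)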

FWIW, OpenAI support kept sending the same template troubleshooting replies and wouldn’t escalate to a human - even though we’re paid users. Frustrating.

Also worth noting: Assistants are slated to be replaced, so you’ll need to migrate to the Responses API by Aug ’26 anyway. So I guess support thinks “why bother fixing it if we’re rolling out the replacement anyway”, ugh…
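
For the basic chat part of an assistant, the equivalent Responses API call is small. A minimal sketch with a recent openai SDK (the model and prompt strings are placeholders):

// Minimal sketch of the Responses API call that replaces a simple assistant run.
// Requires a recent openai npm package; the strings below are placeholders.
import OpenAI from "openai";

const client = new OpenAI();

const response = await client.responses.create({
  model: "gpt-4o",
  instructions: "Same system prompt the assistant used.",
  input: "User message that would previously have gone into a thread.",
});

console.log(response.output_text); // convenience accessor for the text output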

2 Likes

Thanks for such a thorough report @architeg, and thanks for the unblock @_j!

Looking into this, will get back here once we know more.

3 Likes

Update:
Got another “response” from OpenAI chat support - basically a copy-paste checklist telling me to “check my internet connection”. :man_facepalming:

Appreciate the follow-up here though - thank you for finally looking into it. Will wait for your findings.

Are you able to share your Request ID? Feel free to share it in your existing Support thread and I should be able to track it down. Thank you!

1 Like

Thank you for looking into this! My Request ID is req_1fd167b9ac78c564259b625c3ca3eb19. I’ve shared it in the support thread as well. Looking forward to your findings.

1 Like

Thanks for following up - I just pulled a Request ID from one of the failing Playground calls:

req_d4683090ae7c61a16e3cee1b27b16f8f

I’ve also shared it in the support chat so they can trace it internally. Appreciate you taking the time to look into this!

1 Like

Hmm, we weren't able to find requests associated with those req_ids (likely because they fall outside our search time window). Would you all mind sharing new req_ids + timestamps? Thank you!

The issue seemed to resolve itself around October 24. Although it’s now working, I’d still like to understand the root cause in case it happens again.

The most recent rate_limit error occurred with request ID req_1fd167b9ac78c564259b625c3ca3eb19 at Thu, 23 Oct 2025 19:32:54 GMT.

Additionally, our server logs showed that when the rate_limit_exceeded error occurred, the status, headers, and requestID fields were sometimes undefined, as shown below:

{
  status: undefined,
  headers: undefined,
  requestID: undefined,
  error: {
    message: "You've exceeded the rate limit, please slow down and try again later.",
    type: "invalid_request_error",
    param: null,
    code: "rate_limit_exceeded"
  },
  code: "rate_limit_exceeded",
  param: null,
  type: "invalid_request_error"
}
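
In case it helps for the next round, this is roughly the logging we are adding so that any future failure carries a usable request ID and timestamp (a sketch only - the property names follow the error object dumped above, and the thread ID is a placeholder):

// Sketch: log a timestamp + request ID whenever an Assistants v2 call fails.
// threads.messages.create stands in for whichever call 429s; "thread_xxx" is a placeholder.
import OpenAI from "openai";

const client = new OpenAI();

try {
  await client.beta.threads.messages.create("thread_xxx", {
    role: "user",
    content: "ping",
  });
} catch (err) {
  if (err instanceof OpenAI.APIError) {
    console.error({
      at: new Date().toISOString(), // the timestamp support asked for
      status: err.status,           // sometimes undefined, as in the dump above
      code: err.code,               // "rate_limit_exceeded"
      requestID: err.requestID,     // sometimes undefined, as in the dump above
    });
  }
  throw err;
}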

Hi,

I don’t have any more fresh request IDs from that org - because I stopped using that throttled organization entirely after I created a new organization and rebuilt everything there.

The last failing request ID I captured was:

req_d4683090ae7c61a16e3cee1b27b16f8f

and that was from around Oct 24 (that’s the last day I tested anything in the broken org).

If you need a fresh request ID, I will have to temporarily switch back to the broken org, trigger another failing Playground request, and pull a new req_id - but please confirm that it will actually be looked up promptly once I send it, before I go through that setup again.

Thanks

Btw, it’s understandable that you couldn’t find anything associated with that older request ID. This is actually consistent with the exact failure mode we’re likely dealing with here:

org-level limiter corruption - where the limiter short-circuits the request before full request metadata is committed into the logging system.

So the request ID gets generated, but the backend never stores all the fields that would later make it searchable. That also explains why you weren’t able to locate the other user’s request ID either - same condition, same missing trace problem.