Content Moderation API throwing Internal Server Errors

Hey,
around an hour ago we suddenly started hitting unexpected rate limits on the Content Moderation API.
Now it's occasionally throwing Internal Server Errors as well.

Is there something wrong with the endpoint?

Request that we’re using:

        response = openai_client.moderations.create(
            model="omni-moderation-latest",
            input=json.dumps(content_data),
        )
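In the meantime, transient 429s and 500s can be smoothed over with a retry wrapper. Here's a stdlib-only sketch; the `.status_code` attribute matches how the openai SDK's `APIStatusError` exposes HTTP status, but adapt the check to however your client surfaces errors:

```python
import random
import time

def call_with_retries(fn, max_attempts=5, base_delay=1.0,
                      retry_statuses=(429, 500, 502, 503)):
    """Retry fn() with exponential backoff plus jitter on transient
    HTTP errors. fn must raise exceptions carrying a .status_code
    attribute (as the openai SDK's APIStatusError does)."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception as exc:
            status = getattr(exc, "status_code", None)
            if status not in retry_statuses or attempt == max_attempts - 1:
                raise  # non-retryable, or out of attempts
            # back off exponentially, with jitter proportional to base_delay
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

# Usage (hypothetical names -- adapt to your own client and content):
# result = call_with_retries(
#     lambda: openai_client.moderations.create(
#         model="omni-moderation-latest",
#         input=content_text,
#     )
# )
```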

https://openai.com/index/upgrading-the-moderation-api-with-our-new-multimodal-moderation-model/

Just found out that a new model was deployed and new rate limits seem to apply…

How to request more quota?


The omni model is throwing 500 Internal Server Error.
The text model is throwing 429 Too Many Requests for no reason.

Please fix asap!


There is no “request exception” option for this model in the limit-increase form:

We are not currently accepting requests for other models.

This includes GPT-4 Turbo preview models. If you feel a model is missing here, you can let us know by reaching out at our help center.


The token rate limits for this moderation model are actually tiered, instead of everybody getting 150k TPM. At the top tier, that's four times the throughput of the previous model.

All the way up through Tier-4, though, it's 500 RPM and 20k TPM. Yes, that's one-fifth of just the input context length of gpt-4o, per minute, and less than Tier 1 can send to gpt-4o in a single call.

Besides English text being metered at 10%–30% extra tokens, consider that every image counts as 4,800 tokens.
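Putting those numbers together, here's a rough capacity check. The 4,800 tokens per image and the 20k-TPM Tier-4 cap come from the figures above; the 1.3 text multiplier (worst case of 10–30%) and the ~4 chars/token heuristic are my assumptions, not official figures:

```python
IMAGE_TOKENS = 4_800     # per image, per the rate-limit docs quoted above
TEXT_OVERHEAD = 1.3      # assume the worst case of the 10-30% extra
TPM_LIMIT = 20_000       # Tier-4 TPM figure from the post above

def estimate_tokens(text_chars: int, num_images: int) -> int:
    """Rough per-request token estimate for the omni moderation model."""
    text_tokens = (text_chars / 4) * TEXT_OVERHEAD  # ~4 chars/token heuristic
    return int(text_tokens + num_images * IMAGE_TOKENS)

def max_requests_per_minute(tokens_per_request: int, tpm: int = TPM_LIMIT) -> int:
    """How many such requests fit inside the per-minute token budget."""
    return max(0, tpm // max(1, tokens_per_request))
```

At these limits, a request with one image and a few kilobytes of text burns through several thousand tokens, so only a handful of image-bearing requests fit in a minute.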


Outside of outright failures, you can watch the “x-” rate-limit headers, if they ever start working, which they haven't for a long time and still don't:

x-content-type-options: nosniff
x-request-id: req_853047ee45d5..
---
{'harassment': False, 'harassment_threatening': False, 'hate': False, 'hate_threatening': False, 'illicit': False, 'illicit_violent': False, 'self_harm': False, 'self_harm_instructions': False, 'self_harm_intent': False, 'sexual': False, 'sexual_minors': False, 'violence': False, 'violence_graphic': False, 'harassment/threatening': False, 'hate/threatening': False, 'illicit/violent': False, 'self-harm/intent': False, 'self-harm/instructions': False, 'self-harm': False, 'sexual/minors': False, 'violence/graphic': False}

x-content-type-options: nosniff
x-request-id: req_6de4dc697...
---
{'harassment': False, 'harassment_threatening': False, 'hate': False, 'hate_threatening': False, 'illicit': None, 'illicit_violent': None, 'self_harm': False, 'self_harm_instructions': False, 'self_harm_intent': False, 'sexual': False, 'sexual_minors': False, 'violence': False, 'violence_graphic': False, 'self-harm': False, 'sexual/minors': False, 'hate/threatening': False, 'violence/graphic': False, 'self-harm/intent': False, 'self-harm/instructions': False, 'harassment/threatening': False}
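If and when those headers do show up, something like this can pull them out. `with_raw_response` is the openai-python v1 mechanism for reaching response headers; whether the moderations endpoint actually populates the `x-ratelimit-*` fields is exactly what's in question here:

```python
def parse_ratelimit_headers(headers) -> dict:
    """Pull the standard x-ratelimit-* fields (plus the request id) out of
    a response's headers. Field names follow OpenAI's documented
    rate-limit headers; missing fields come back as None."""
    keys = (
        "x-ratelimit-limit-requests",
        "x-ratelimit-remaining-requests",
        "x-ratelimit-limit-tokens",
        "x-ratelimit-remaining-tokens",
        "x-request-id",
    )
    return {k: headers.get(k) for k in keys}

# With the openai Python SDK (>= 1.x) the raw headers are reachable via:
# raw = openai_client.moderations.with_raw_response.create(
#     model="omni-moderation-latest", input="some text")
# print(parse_ratelimit_headers(raw.headers))
```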

Edit: Moved to own topic
/t/moderation-api-not-working-with-project-scoped-api-key/971313


I’m having the same problem with the moderations endpoint using the omni model (omni-moderation-latest) and a project API key.

This example request results in 500:

curl .../v1/moderations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "input": [
          {
            "type": "text",
            "text": "This is my text I want to test..."
          }
     ],
     "model": "omni-moderation-latest"
  }'

And the one from the docs results in the same 500:

curl ..../v1/moderations \
  -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "omni-moderation-latest",
    "input": "...text to classify goes here..."
  }'

The response is HTTP status 500 with this body:

{
  "error": {
    "message": "Unexpected error",
    "type": "server_error",
    "param": null,
    "code": null
  }
}

Both these calls work fine using an API key from the Default project.
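For anyone instrumenting this on their side, here is a small triage helper that distinguishes the 500 body shown above from ordinary rate limiting. The classification labels are mine, not an official taxonomy; the project-key connection is just the pattern reported in this thread:

```python
import json

def classify_moderation_failure(status: int, body: str) -> str:
    """Rough triage for the failures described in this thread:
    429s vs. the 500 'Unexpected error' seen with project-scoped keys."""
    if status == 429:
        return "rate_limited"
    if status == 500:
        try:
            err = json.loads(body).get("error", {})
        except ValueError:
            return "server_error"  # unparseable body, plain server error
        if err.get("type") == "server_error":
            # matches the body above; reported with project-scoped keys
            return "server_error_possible_project_key_issue"
        return "server_error"
    return "other"
```

Logging the result per request (alongside `x-request-id`) makes it easier to show support which key type triggers which failure.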