Critical bug using Flex service tier with GPT-5 since last Monday - 500 Internal Server Error

Hi,

We run a series of a dozen prompts against GPT-5 with the Flex option.

Since last Monday, the sequence has stopped working at around prompt number 8, with the error:

server_error, 500 internal server error.

The strange thing is that it always fails at the same prompt in the sequence. It doesn't matter whether we run it three times in parallel or with any other option.

Message ID for OpenAI support: req_8ea70edbcedf45c68ce13b1da60e73c7

Any help is appreciated.


Had the same issue, fixed it by removing the “service_tier: flex” param from the API call.

Seems like it’s really buggy at the moment.

Yes today it was down again…


Hey everyone, can someone please confirm whether you are still facing this issue? If so, please share a request ID with us. Thank you!

Hello, here is a request ID for one Internal Server Error that happened just now:

STATUS DATA:
{
  "ExceptionType": "PostOpenAiRequestException",
  "ExceptionMessage": "Request to https://api.openai.com/v1/responses failed with status code InternalServerError (Internal Server Error). Response: { \"error\": { \"message\": \"An error occurred while processing your request. You can retry your request, or contact us through our help center at help.openai.com if the error persists. Please include the request ID req_553fc3fe089843dd98c320c1279b4572 in your message.\", \"type\": \"server_error\", \"param\": null, \"code\": \"server_error\" } }"
}

Confirmed

While the 500 is reported above as prompt-dependent on "gpt-5", there is nothing prompt-specific about the consistent failure I experience:

gpt-5-nano + “flex” = Status 500

The 500 arrived sometimes quickly and sometimes slowly, but no "flex" call has succeeded.

FAIL! Error code: 500 - {'error': {'message': 'An error occurred while processing your request. You can retry your request, or contact us through our help center at help.openai.com if the error persists. Please include the request ID req_71eb8b41991943509689cad287fcce6d in your message.', 'type': 'server_error', 'param': None, 'code': 'server_error'}}

or

req_f00cfb5e95d94a189126ada790fc5735
req_98003ca43ba64ee8a0bd8bf5229af7bf
req_66a9f0b4ba684214854363ff180ce426
req_052e169ade8948a1a3eca50a73ca3eb1
req_7dc85e58f3544838a28e386ddbd7424a

Replication

Python, using the official SDK, run against the "flex"-capable models:

'''Responses API - test "flex" service_tier against models'''
from openai import OpenAI

client = OpenAI(timeout=120, max_retries=0)
models = ["gpt-5", "gpt-5-mini", "gpt-5.1", "o3", "o4-mini", "gpt-5-nano"]
models = ["gpt-5-nano"] * 4  # override: repeat only the failing model
input_messages=[
    {
        "type": "message",
        "role": "developer",
        "content": [
            {
              "type": "input_text",
              "text": "You are a direct and brief chat partner.",
            }
        ]
    },
    {
        "type": "message",
        "role": "user",
        "content": [
            {
              "type": "input_text",
              "text": "Hi! What is your name and location?",
            }
        ]
    }
]
for model in models:
    response = None
    try:
        response = client.responses.with_raw_response.create(
            model=model, input=input_messages,
            max_output_tokens=3456, store=False,
            reasoning={"effort": "low"},
            service_tier="flex",  # parameter in question
        )
        print(f"{model} Request: {response.headers.get('x-request-id')}")
        parsed = response.parse()  # parse once, reuse
        print(f"Tier:{parsed.service_tier}\n{parsed.output_text}")
        #print(parsed.usage.model_dump())
    except Exception as e:
        print(f"{model} FAIL! {e}")
        if response is not None:  # raw response may still carry the request ID
            print(f"Request: {response.headers.get('x-request-id')}")

Results

gpt-5-nano FAIL! Error code: 500 - {'error': {'message': 'An error occurred while processing your request. You can retry your request, or contact us through our help center at help.openai.com if the error persists. Please include the request ID req_90d81c89e6d54477b4a6f2ee908e1cfc in your message.', 'type': 'server_error', 'param': None, 'code': 'server_error'}}
gpt-5-nano FAIL! Error code: 500 - {'error': {'message': 'An error occurred while processing your request. You can retry your request, or contact us through our help center at help.openai.com if the error persists. Please include the request ID req_f816161745854575852b6999bd39d9d4 in your message.', 'type': 'server_error', 'param': None, 'code': 'server_error'}}
gpt-5-nano FAIL! Error code: 500 - {'error': {'message': 'An error occurred while processing your request. You can retry your request, or contact us through our help center at help.openai.com if the error persists. Please include the request ID req_46be3d1ee8d74a8abb749384a6f5d9a0 in your message.', 'type': 'server_error', 'param': None, 'code': 'server_error'}}
gpt-5-nano FAIL! Error code: 500 - {'error': {'message': 'An error occurred while processing your request. You can retry your request, or contact us through our help center at help.openai.com if the error persists. Please include the request ID req_7d5d1c00d6594f2ea650d16ce7db1a7a in your message.', 'type': 'server_error', 'param': None, 'code': 'server_error'}}

Sanity check:

  • other flex models - all successful
  • "priority": gpt-5-nano is not delivered priority service either, which matches the "priority" pricing table omitting this model:
gpt-5-nano Request: req_aa34560b5d0c4b9aad025a3dec5a1177
Tier:default
I’m ChatGPT. I don’t have a physical location—I'm powered by servers in the cloud. How can I help you today?

The flex pricing table, however, does list nano among the supported models.

Experienced this issue just now:

Request to https://api.openai.com/v1/responses failed with status code InternalServerError (Internal Server Error). Response: {
  "error": {
    "message": "An error occurred while processing your request. You can retry your request, or contact us through our help center at help.openai.com if the error persists. Please include the request ID req_0762a51c39354331835f09c8ca138ac1 in your message.",
    "type": "server_error",
    "param": null,
    "code": "server_error"
  }
}

Some request IDs: req_0762a51c39354331835f09c8ca138ac1, req_58834455ce244b96afd52580a68ee670, req_b184eee91e284aa7a36e3c6ec436587b