GPT-4.1-nano seems to be down

I have this simple code that has been working for several months without any errors:

```
from openai import AsyncOpenAI

client = AsyncOpenAI()


async def get_product_name(description: str) -> str:
    prompt = (
        f"Create a product name given the description. *Important:* your response must be one to three words long."
        f" <BEGIN DESCRIPTION>\n{description}\n<END DESCRIPTION>\n\n"
        f"**Your response must be a one to three words long description of the product above.**")

    response = await client.responses.create(model="gpt-4.1-nano", input=prompt)

    return response.output_text
```

This is now producing errors 100% of the time:
openai.InternalServerError: Error code: 500 - {'error': {'message': 'An error occurred while processing your request. You can retry your request, or contact us through our help center at help.openai.com if the error persists. Please include the request ID req_11454ab6bee9437fab7fcb2fa905972b in your message.', 'type': 'server_error', 'param': None, 'code': 'server_error'}}

I couldn’t find how to report it using the help center. How do I report an outage?

When I change the model name to "gpt-4.1" it works. Also "gpt-5-nano" works. I guess I'm switching.
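Since switching model names works, one stopgap until the root cause is fixed is a fallback wrapper that tries models in order. A minimal sketch, assuming a fallback order you'd choose yourself; the `_stub` function stands in for a real `client.responses.create` call so the logic can be run without network access:

```python
import asyncio

# Hypothetical fallback order; adjust to the models your account can use.
FALLBACK_MODELS = ["gpt-4.1-nano", "gpt-4.1", "gpt-5-nano"]


async def call_with_fallback(call, models):
    """Try each model in order; return the first successful result.

    `call` is any async function taking a model name, e.g. a thin wrapper
    around client.responses.create. Re-raises the last error if all fail.
    """
    last_exc = None
    for model in models:
        try:
            return await call(model)
        except Exception as exc:  # in real code, catch openai.InternalServerError
            last_exc = exc
    raise last_exc


# Demo with a stub: the first model "fails", the second succeeds.
async def _stub(model: str) -> str:
    if model == "gpt-4.1-nano":
        raise RuntimeError("500 server_error")
    return f"ok from {model}"


result = asyncio.run(call_with_fallback(_stub, FALLBACK_MODELS))
print(result)  # ok from gpt-4.1
```

In production you'd want to retry the same model once or twice (500s are often transient) before falling back.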

Questions remain:

  1. How do I report an error using the provided request ID?
  2. Is 4.1-nano deprecated? Where can I check model status?
3 Likes

Thanks for flagging this!
I can reproduce it with all GPT-4.1 variants when using the responses API, and I’ve pinged staff to take a look.

1 Like

Success in calling it for me. This bypasses the OpenAI-provided library code. In this case it's AI-generated code, with some bloat (I then told ChatGPT 5.1 what not to do), since I'm away from a bunch of my own code examples.

```
import os
import asyncio
import json
import unicodedata

import httpx


def _sanitize_text(value: str | bytes) -> str:
    """
    Coerce input to a clean Unicode string safe for JSON and UTF-8 transmission.

    - Accepts str or bytes (even though the public signature uses str).
    - If bytes: try UTF-8, then cp1252 with replacement on failure.
    - Normalizes to NFC.
    - Removes control characters except for whitespace controls (\n, \r, \t).
    - Round-trips through UTF-8 with replacement to ensure valid code points only.
    """
    # Coerce bytes → str
    if isinstance(value, bytes):
        try:
            text = value.decode("utf-8")
        except UnicodeDecodeError:
            # Fallback for legacy 8-bit code pages like cp1252
            text = value.decode("cp1252", errors="replace")
    else:
        text = value

    if not isinstance(text, str):
        text = str(text)

    # Unicode normalization
    text = unicodedata.normalize("NFC", text)

    # Remove control characters except \n, \r, \t
    def _keep_char(ch: str) -> bool:
        if ch in ("\n", "\r", "\t"):
            return True
        cat = unicodedata.category(ch)
        # Categories starting with "C" are control, format, surrogate, private-use, unassigned
        return not cat.startswith("C")

    text = "".join(ch for ch in text if _keep_char(ch))

    # Force UTF-8 validity; replace any remaining invalid sequences
    text = text.encode("utf-8", errors="replace").decode("utf-8")

    return text


async def get_product_name(description: str) -> str:
    # First, sanitize the raw description (even if it happens to be bytes at runtime).
    sanitized_description = _sanitize_text(description)

    # Original prompt template, but interpolating the sanitized description.
    prompt = (
        f"Create a product name given the description. *Important:* your response must be one to three words long."
        f" <BEGIN DESCRIPTION>\n{sanitized_description}\n<END DESCRIPTION>\n\n"
        f"**Your response must be a one to three words long description of the product above.**"
    )

    # Run a second sanitizer pass over the full prompt to guard against any
    # unintended characters introduced around the description.
    prompt = _sanitize_text(prompt)

    api_key = os.environ["OPENAI_API_KEY"]  # guaranteed to exist per your setup

    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

    payload = {
        "model": "gpt-4.1-nano",
        "input": prompt,
        "max_output_tokens": 2025,
    }

    async with httpx.AsyncClient(timeout=30.0) as client:
        try:
            response = await client.post(
                "https://api.openai.com/v1/responses",
                headers=headers,
                json=payload,
            )
            response.raise_for_status()
        except httpx.HTTPStatusError as exc:
            # Print raw body for debugging any 4xx/5xx issues
            print("OpenAI API error body:")
            try:
                print(exc.response.text)
            except Exception:
                print("<unable to read error body>")
            raise

    data = response.json()

    # Parse the Responses API "output" array:
    # Keep only elements of type "message" or "refusal".
    # From each, collect "output_text" entries from "content".
    output = data.get("output") or []
    collected: list[str] = []

    for item in output:
        item_type = item.get("type")
        if item_type not in {"message", "refusal"}:
            continue

        contents = item.get("content") or []
        for block in contents:
            if block.get("type") == "output_text":
                text = block.get("text") or ""
                if text:
                    collected.append(text)

    result = "".join(collected).strip()

    if not result:
        # Defensive fallback: show some context for debugging
        raise RuntimeError(
            f"No output_text segments found in response: {json.dumps(data, ensure_ascii=False)[:2000]}"
        )

    return result


if __name__ == "__main__":
    async def _demo() -> None:
        doc = "A compact, wireless, noise-cancelling pair of travel headphones."
        name = await get_product_name(doc)
        print("Suggested product name:", name)

    asyncio.run(_demo())
```

Output from the demo string passed:

Suggested product name: TravelSilence
or
Suggested product name: TravelSphere
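The output-array parsing in the script above can also be exercised standalone. This sketch applies the same logic (keep "message"/"refusal" items, collect their "output_text" blocks) to a hand-built sample payload; the field names mirror the Responses API shape used in the code, but the payload itself is fabricated for illustration:

```python
def extract_output_text(data: dict) -> str:
    """Collect output_text segments from a Responses-API-shaped payload."""
    collected = []
    for item in data.get("output") or []:
        if item.get("type") not in {"message", "refusal"}:
            continue
        for block in item.get("content") or []:
            if block.get("type") == "output_text":
                collected.append(block.get("text") or "")
    return "".join(collected).strip()


# Hand-built sample payload mirroring the shape parsed above.
sample = {
    "output": [
        {"type": "reasoning", "content": []},  # skipped: not message/refusal
        {
            "type": "message",
            "content": [{"type": "output_text", "text": "TravelSilence"}],
        },
    ]
}

print(extract_output_text(sample))  # TravelSilence
```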

My first thought was that the SDK would be adding a bad field to the request shape, which you never see echoed because of the status 500. Or that there was glitchy input "there" but unseen. But the two parameters and prompt input run fine when casting to UTF-8 is ensured and OpenAI's SDK library code-of-the-day is not used.
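The "glitchy input" theory is easy to check in isolation. Here is a condensed version of the sanitizer's core steps (bytes decode with cp1252 fallback, NFC normalization, control-character stripping); it is a sketch of the same idea, not the exact function from the script:

```python
import unicodedata


def sanitize(value):
    """Condensed sanitizer: bytes -> str, NFC normalize, strip controls."""
    if isinstance(value, bytes):
        try:
            value = value.decode("utf-8")
        except UnicodeDecodeError:
            # Legacy 8-bit fallback, as in the full sanitizer above.
            value = value.decode("cp1252", errors="replace")
    text = unicodedata.normalize("NFC", str(value))
    # Drop category-"C" characters (control, format, etc.) except \n, \r, \t.
    return "".join(
        ch for ch in text
        if ch in "\n\r\t" or not unicodedata.category(ch).startswith("C")
    )


# cp1252 0x93/0x94 are curly quotes; invalid as UTF-8, so the fallback kicks in.
print(sanitize(b"\x93smart quotes\x94"))  # “smart quotes”
print(sanitize("a\x00b\u200bc"))  # abc  (NUL and zero-width space removed)
```

This is the kind of invisible input (stray NULs, zero-width characters, mis-encoded bytes) that never shows up in a printed prompt but can make two "identical" requests behave differently.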

1 Like

Hi everyone, apologies for the delay here. I haven't been able to reproduce this issue, so please let me know if you're still able to reproduce it. Thank you!

1 Like

The model-being-down error is gone. BUT the first and more important issue, which took more of my time to resolve, was:

  1. How do I report an error using the provided request ID?

The instruction provided in the error message does not work:
```
contact us through our help center at ``help.openai.com`` if the error persists. Please include the request ID req_11454ab6bee9437fab7fcb2fa905972b in your message.’
```

1 Like

Hi @yasonk!

Thanks for the great feedback!
In the screenshot below you can see how I would handle it, but of course that’s not ideal for urgent issues.

The Developer Community Forum is, in my view, the best place to surface technical issues with the API and related services.

In this case I saw your post, reproduced the issue, and flagged it to staff. After keeping an eye on it, I noticed the issue had cleared up again. I should have taken a moment to post an update here.
Thanks for raising this!

2 Likes

This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.