gpt-5-mini returns empty output text on certain images

I’m having gpt-5-mini analyze images using structured outputs with a JSON schema.

A small number of images (fewer than 0.1%) reliably cause gpt-5-mini to return empty output text (`response.output_text` and `response.output[0].content[0].text` are both empty).

I tested one of these images dozens of times, getting the same empty result each time. The image loads fine in a viewer, and ChatGPT processes it without issue.

Is this a known issue?

Your output might not be in `output` at all.

Check another field. (I cannot produce a link to the API reference that gets you anywhere near the response object key.)

I can get you line 98500+ of “get markdown”, and then manually edit the broken list items within:

- `error: ResponseError`

  An error object returned when the model fails to generate a Response.

  - `code: "server_error" or "rate_limit_exceeded" or "invalid_prompt" or 15 more`

    The error code for the response.

    - `"server_error"`
    - `"rate_limit_exceeded"`
    - `"invalid_prompt"`
    - `"vector_store_timeout"`
    - `"invalid_image"`
    - `"invalid_image_format"`
    - `"invalid_base64_image"`
    - `"invalid_image_url"`
    - `"image_too_large"`
    - `"image_too_small"`
    - `"image_parse_error"`
    - `"image_content_policy_violation"`
    - `"invalid_image_mode"`
    - `"image_file_too_large"`
    - `"unsupported_image_media_type"`
    - `"empty_image_file"`
    - `"failed_to_download_image"`
    - `"image_file_not_found"`

  - `message: string`

    A human-readable description of the error.

- `incomplete_details: object { reason }`

  Details about why the response is incomplete.

  - `reason: optional "max_output_tokens" or "content_filter"`

    The reason why the response is incomplete.

    - `"max_output_tokens"`
    - `"content_filter"`
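Those two fields can be checked programmatically before trusting `output_text`. A minimal sketch, assuming the official `openai` Python SDK’s `Response` object (the helper name `diagnose_empty_output` is my own, not part of the SDK):

```python
def diagnose_empty_output(resp):
    """Explain why a Responses API result may have empty output_text."""
    # resp.error is a ResponseError carrying .code and .message when the
    # model failed to generate a response (e.g. an image-related code such
    # as "invalid_image" or "image_parse_error")
    if getattr(resp, "error", None) is not None:
        return f"error: {resp.error.code} - {resp.error.message}"
    # resp.incomplete_details carries a .reason of "max_output_tokens" or
    # "content_filter" when the response was cut short
    if getattr(resp, "incomplete_details", None) is not None:
        return f"incomplete: {resp.incomplete_details.reason}"
    return "no error reported; the model returned empty text"
```

Running this on one of the failing responses should at least tell you whether the API itself flagged the image.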

Thanks! I forgot to mention one other fallback I use for salvaging the text output in this case, where an exception isn’t triggered but the output text is incomplete:

```python
json.dumps(resp.model_dump()) if hasattr(resp, "model_dump") else str(resp)
```

…where `resp` is the response object.
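For reference, the same fallback wrapped as a small helper. Assumptions: the SDK response is a Pydantic v2 model exposing `model_dump()` (true of the current `openai` Python SDK), and the function name is mine:

```python
import json

def salvage_response_text(resp):
    """Serialize a response object for logging when output_text is empty.

    Pydantic v2 models expose model_dump(), which yields a plain dict;
    str() is the fallback for anything else. default=str keeps json.dumps
    from choking on non-JSON-serializable values inside the dump.
    """
    if hasattr(resp, "model_dump"):
        return json.dumps(resp.model_dump(), default=str)
    return str(resp)
```

The dumped JSON usually still contains whatever partial text the model produced, buried in the `output` items.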