Question on API response delays for ChatGPT 5.2 Pro Xhigh

We have been testing ChatGPT 5.2 Pro Xhigh via the API and are seeing unusually long delays before receiving any response at all.

Often the delays are 7–10 minutes before the model even begins its reasoning or token generation.

What is odd is that this delay doesn’t appear to be “thinking” time. There is simply no response or progress during that period.

Once the response starts, things seem normal.

We wanted to ask:

  • Is this behavior expected right now (e.g., due to capacity constraints or ramp-up)?

  • Is this specific to Pro Xhigh, or something we should expect across tiers?

  • Are there any recommended mitigations or configuration changes on our end?

We understand infrastructure may still be scaling, so mainly want to confirm whether this is normal before investing time in workarounds.
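One way to check whether the delay is queueing rather than hidden "thinking" is to stream the response and measure time-to-first-chunk separately from total time. A minimal sketch of that measurement; the `fake_stream` generator below is a hypothetical stand-in for the iterator a `stream=True` API request would return, so the numbers here are simulated:

```python
import time
from typing import Iterable, Iterator, Tuple

def time_to_first_chunk(stream: Iterable[str]) -> Tuple[float, str]:
    """Consume a streaming response; return (seconds until the first
    chunk arrived, full concatenated text)."""
    start = time.monotonic()
    first_at = None
    parts = []
    for chunk in stream:
        if first_at is None:
            first_at = time.monotonic() - start  # latency before any output
        parts.append(chunk)
    elapsed = first_at if first_at is not None else float("nan")
    return elapsed, "".join(parts)

def fake_stream(delay_s: float = 0.2) -> Iterator[str]:
    """Hypothetical stand-in for a real streaming response: a flat
    delay before the first token, then tokens arrive quickly."""
    time.sleep(delay_s)  # simulated pre-generation (queueing) delay
    for token in ["po", "ng"]:
        yield token

ttfc, text = time_to_first_chunk(fake_stream())
print(f"time to first chunk: {ttfc:.2f}s, text: {text!r}")
```

If time-to-first-chunk dominates total time even for tiny prompts, the bottleneck is before generation starts (queueing/capacity), not reasoning.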


I’ve seen this with “Pro” in general. A simple request that should be quick (send “ping”, receive back “pong”) and generates few output tokens takes a minute instead of seconds. It just churns with nothing streamed. o1-pro especially has huge latency.

Maybe it’s just “they’ll think it was worth 12x the cost if they have to wait”…