We have been testing ChatGPT 5.2 Pro Xhigh via the API and are seeing unusually long delays before receiving any response at all.
Often the delays are 7–10 minutes before the model even begins reasoning or token generation.
What is odd is that this delay doesn’t appear to be “thinking” time. There is simply no response or progress during that period.
Once the response starts, things seem normal.
We wanted to ask:
- Is this behavior expected right now (e.g., due to capacity constraints or ramp-up)?
- Is this specific to Pro Xhigh, or something we should expect across tiers?
- Are there any recommended mitigations or configuration changes on our end?
We understand infrastructure may still be scaling, so we mainly want to confirm whether this is normal before investing time in workarounds.
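For context, here is roughly how we measure the delay. This is a minimal sketch, not our production code: `time_to_first_item` is a hypothetical helper that times how long an iterator takes to yield its first item, and in practice we wrap it around the API's streaming response iterator so that the time-to-first-chunk separates the silent pre-response period from normal generation time.

```python
import time
from typing import Iterable, List, Tuple


def time_to_first_item(stream: Iterable) -> Tuple[float, List]:
    """Return (seconds until first item, all items) for any iterator.

    When wrapped around a streaming API response, the first value
    approximates time-to-first-token: it captures the silent delay
    before generation starts, independent of total generation time.
    """
    start = time.monotonic()
    ttft = 0.0
    items: List = []
    for item in stream:
        if not items:
            # First chunk arrived: everything before this point is the
            # pre-response delay we are asking about, not "thinking" output.
            ttft = time.monotonic() - start
        items.append(item)
    return ttft, items


def fake_stream():
    """Stand-in for a streaming API response with a delayed first chunk."""
    time.sleep(0.2)  # simulated pre-response delay
    yield "first"
    yield "rest"


ttft, items = time_to_first_item(fake_stream())
print(f"time to first chunk: {ttft:.2f}s, chunks: {items}")
```

With this measurement the pre-response delay shows up clearly as the gap before the first chunk, which is why we are fairly confident it is not ordinary reasoning time.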