GPT-5 Models Not Accessible Despite Account Verification

Issue Description

I’m unable to access GPT-5 models (gpt-5-mini, gpt-5, gpt-5.1, gpt-5.2) through the OpenAI API, despite having a verified account with active billing. I consistently receive an “Error talking to OpenAI” response.

Details

Account Status:

  • Verified in API console ✓

  • Active billing method ✓

  • Multiple API keys created and tested

What I’ve Tried:

  • Created new API keys

  • Tested multiple GPT-5 model variants (gpt-5-mini, gpt-5, gpt-5.1, gpt-5.2)

  • Verified models exist in OpenAI’s pricing documentation

  • Confirmed GPT-4.1 and GPT-4.1-mini work without issues

  • Tested via the Home Assistant OpenAI integration (a direct API check is sketched after this list)
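
To rule out the integration itself, a direct check with the OpenAI Python library, roughly like this sketch (not something I’ve run yet), would show whether the key can even see the GPT-5 models:

import openai

client = openai.OpenAI()  # uses OPENAI_API_KEY from the environment

# List the models visible to this API key and keep only GPT-5 variants
gpt5_models = [m.id for m in client.models.list() if m.id.startswith("gpt-5")]
print(gpt5_models or "No GPT-5 models visible to this key")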

Error Message:

Error talking to OpenAI

No additional error details provided by the integration.

Questions

  1. Is GPT-5 in limited beta access? If so, is there a waitlist or specific approval process needed?

  2. Should GPT-5 models be immediately available to verified accounts, or is there a rollout schedule?

  3. Are there specific API key permissions required to access GPT-5 models?

  4. Could this be a regional availability issue?

Environment

  • Home Assistant with OpenAI integration

  • Latest OpenAI Python library

  • API console shows GPT-5 models in pricing chart

Any guidance would be appreciated!

The error message you report is not one that comes from the OpenAI API or its library code.

It is more likely that the Home Assistant integration’s library has not been updated to correctly make calls to these models or handle their responses.

Here is the briefest Python code to call the gpt-5.2 model with reasoning disabled (“none”), in which case it also accepts a top_p parameter:

import openai  # module-level client; requires OPENAI_API_KEY set in the environment

# Call gpt-5.2 with reasoning disabled; top_p is only accepted in this mode
response = openai.chat.completions.create(
    messages=[{"role": "user", "content": "Hello, friend!"}],
    model="gpt-5.2",
    max_completion_tokens=2000,
    reasoning_effort="none",
    top_p=0.9,
)
print(response.choices[0].message.content.strip())

Hello! What can I help you with today?

A third-party library that has not been updated to understand the full set of parameters a particular reasoning model accepts can surface a 404, 401, or 500 error, as can something as basic as not having OPENAI_API_KEY set as an environment variable in the first place. Change “reasoning_effort” to “low” in the example above, for instance, and you’ll get an error, because that combination of parameters with top_p is rejected. What you will never get back from the API, however, is “Error talking to OpenAI”; that text comes from the integration itself.
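
If you want to see the real failure instead of a generic wrapper message, a minimal sketch like the following (assuming the openai Python library v1.x and its standard exception classes) prints the actual status code and error the API returns:

import openai

client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment

try:
    response = client.chat.completions.create(
        model="gpt-5.2",
        messages=[{"role": "user", "content": "Hello, friend!"}],
        max_completion_tokens=2000,
    )
    print(response.choices[0].message.content)
except openai.AuthenticationError as e:
    print("401, bad or missing API key:", e.message)
except openai.NotFoundError as e:
    print("404, model not found or not available to this key:", e.message)
except openai.APIStatusError as e:
    # Any other HTTP error, e.g. 400 for an unsupported parameter combination
    print("API returned", e.status_code, ":", e.message)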