I’m a ChatGPT Plus subscriber, but also a Tier 3 API user.
I’ve tried to use Codex in multiple ways:
using the --free option
providing my own API key
logging in with my ChatGPT account
Every attempt ends with the same error message:
The model “codex-mini-latest” does not appear in the list of models available to your account. Double-check the spelling (use
openai models list
to see the full list) or choose another model with the --model flag.
How do I enable this model for my account? Is it country-restricted? (I’m in the EU.)
This model is currently only available to Pro/Enterprise/Team subscribers, I believe. You need to use the new --login feature, which lets you sign in with your OpenAI account. Then you grant access to your project, and if the model is available there, you can use it in Codex.
First, confirm that your organization does have API access to the model. Using your existing Python OpenAI environment, here is example code I produced to check the models endpoint for you:
import asyncio

from openai import AsyncOpenAI


async def fetch_filtered_models(keywords):
    """Return the organization's model IDs whose names contain any of the keywords."""

    def contains_keyword(model_name, keyword_items):
        return any(kw in model_name for kw in keyword_items)

    try:
        client = AsyncOpenAI()
        model_obj = await client.models.list()
    except Exception as err:
        print(f"\n[Error] Model listing API call failed: {err}")
        return None

    # flatten the paginated response into a sorted list of model IDs
    model_dict = model_obj.model_dump().get("data", [])
    model_list = sorted([model["id"] for model in model_dict])
    filtered_models = {
        model for model in model_list if contains_keyword(model, keywords)
    }
    return list(filtered_models)


async def nab_models():
    match_keywords = ["computer", "code"]
    print("Getting organization models only matching keywords")
    filtered_models = await fetch_filtered_models(match_keywords)
    if filtered_models:
        for model in filtered_models:
            print(model)
    else:
        print("error: no matching models returned")


if __name__ == "__main__":
    asyncio.run(nab_models())
output:
Getting organization models only matching keywords
computer-use-preview
codex-mini-latest
computer-use-preview-2025-03-11
Then make a test call to the model. It requires the Responses endpoint, takes no sampling parameters, and you have to parse past the reasoning summaries that are also returned:
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="codex-mini-latest",
    input=[
        {
            "role": "system",
            "content": [
                {
                    "type": "input_text",
                    "text": "You are a computer programmer's assistant code-writer."
                }
            ]
        },
        {
            "role": "user",
            "content": [
                {
                    "type": "input_text",
                    "text": "Produce a one-paragraph introduction to assistant"
                }
            ]
        }
    ],
    max_output_tokens=3456,  # needs reasoning budget
    store=False
)

# print only message items; reasoning summary items have no content to print
for item in response.output:
    if not hasattr(item, 'content') or item.content is None:
        continue
    for entry in item.content:
        print(entry.text)
>>OpenAI Assistant is an AI-powered coding partner designed to help developers write,
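If you just want the final assistant text without walking the output items yourself, recent openai-python releases expose a convenience property on the response object; a minimal sketch, assuming your installed SDK version includes it:

# shorthand for the collected message text, skipping reasoning items
print(response.output_text)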
Then you can proceed to the Codex agent SDK if that is your target. Update all your libraries first, since it was updated as recently as two days ago, and follow the model configuration file instructions.
Sorry for the lack of response.
I solved the problem by removing the project (which was created a long time ago, back when GPT-3.5 was the top model) and creating a new one. I guess some old projects were not upgraded correctly and do not have the new models available.
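If anyone else runs into this, a quick way to check whether a given project's API key can actually see the model is something like this minimal sketch with the synchronous client:

from openai import OpenAI

client = OpenAI()  # uses the API key of the project you want to check
available = sorted(m.id for m in client.models.list())
print("codex-mini-latest" in available)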