OpenAI rejected the request

I was using codex-mini-latest without issue a few days back. When I try to use it today in the Codex CLI, I get:

    system
    ⚠️  OpenAI rejected the request (request ID: req_74bdbb20c6fb85df5e7ffb67xxxxxxx). Error details: Status: 404, Code:
     unknown, Type: invalid_request_error, Message: 404 This model is only supported in v1/responses and not in
    v1/chat/completions.. Please verify your settings and try again.
    │ localhost session: be76990e1fe2434284c7fa67xxxxxxx          │
    │ ↳ workdir: ~/project-files/zip-test                          │
    │ ↳ model: codex-mini-latest                                   │
    │ ↳ provider: codex-mini-latest                                │
    │ ↳ approval: suggest

Using codex --version: OpenAI Codex (research preview) v0.1.2505172129

What can I look at to resolve this issue? I am not sure if this is a bug or if I simply cannot find the correct docs to resolve it.

Thanks in advance.

You answered it in your own post.

Yes, I answered my own question. As I stated, I do not know how to fix it and cannot find documentation to do so. I would not intentionally ask a foolish question merely to be ridiculed. I am trying to follow a classroom tutorial.

Thanks for your clear help :slight_smile:


What the message is referring to is the difference between the Chat Completions API endpoint that has been around for two years and the newer Responses API endpoint.

It would seem an impossible oversight for OpenAI to make a coding API model exclusive to the Responses endpoint and then have the primary way to use it not go through that endpoint. I would update to the latest Codex CLI release and see if that doesn't solve the problem seen here.
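To make the distinction concrete, here is a small sketch of the two request shapes. The endpoint paths are the public ones; `endpoint_for` is a hypothetical helper I made up to illustrate the routing, and the "Responses-only" behavior of codex-mini-latest is taken straight from the 404 message above:

```python
API_BASE = "https://api.openai.com/v1"

def endpoint_for(model: str) -> str:
    """Pick the API path for a model. codex-mini-latest is Responses-only
    per the 404 error above; most chat models use /chat/completions."""
    responses_only = {"codex-mini-latest"}
    if model in responses_only:
        return f"{API_BASE}/responses"
    return f"{API_BASE}/chat/completions"

# Responses API payload: a single `input` field.
responses_payload = {
    "model": "codex-mini-latest",
    "input": "Write a function that unzips a file.",
}

# Chat Completions payload: a `messages` list. This is the shape the
# older Codex CLI sends, which is why the 404 appears.
chat_payload = {
    "model": "codex-mini-latest",
    "messages": [{"role": "user", "content": "Write a function that unzips a file."}],
}

print(endpoint_for("codex-mini-latest"))  # → https://api.openai.com/v1/responses
print(endpoint_for("o4-mini"))            # → https://api.openai.com/v1/chat/completions
```

So the error is not about the model being unavailable; it is about which URL the client sends the request to.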

Thank you for your assistance. Perhaps I have not made my issue clear:

“I was using the codex-mini-latest without issue at one time days back. I try to use today in codex cli and get the error”

I can no longer use “codex-mini-latest”. I can use the default o4-mini without issues. I have full access to models as we are a verified organization (school education account).

Using codex --version: OpenAI Codex (research preview) v0.1.2505172129. This seems to be the latest version; it is what is shown after running npm install -g @openai/codex.

I am having the same problem here… I fully understand the v1/responses vs. v1/chat/completions distinction, but is there a way for anyone to genuinely help instead of stating the obvious? If this is something that must be resolved by OpenAI, we will have to wait, I guess… :eyes: "│Unable to generate explanation: 404 This model is only supported in v1/responses and not in v1/chat/completions." · Issue #1346 · openai/codex · GitHub

Yes, it is not easy to get support for Codex CLI. We dropped it from classroom use entirely (campus wide). There is no place to go to ask questions.

As a classroom group, we asked at GitHub - openai/codex: Lightweight coding agent that runs in your terminal and the question / issue disappeared.

Shame, Codex CLI could be an excellent tool. Maybe in the future…

Just got a message from fellow teacher:

Full configuration example

Below is a comprehensive example of config.json with multiple custom providers:

{
  "model": "o4-mini",
  "provider": "openai",
  "providers": {
    "openai": {
      "name": "OpenAI",
      "baseURL": "https://api.openai.com/v1",
      "envKey": "OPENAI_API_KEY"
    },
    "azure": {
      "name": "AzureOpenAI",
      "baseURL": "https://YOUR_PROJECT_NAME.openai.azure.com/openai",
      "envKey": "AZURE_OPENAI_API_KEY"
    },
    "openrouter": {
      "name": "OpenRouter",
      "baseURL": "https://openrouter.ai/api/v1",
      "envKey": "OPENROUTER_API_KEY"
    },
    "gemini": {
      "name": "Gemini",
      "baseURL": "https://generativelanguage.googleapis.com/v1beta/openai",
      "envKey": "GEMINI_API_KEY"
    },
    "ollama": {
      "name": "Ollama",
      "baseURL": "http://localhost:11434/v1",
      "envKey": "OLLAMA_API_KEY"
    },
    "mistral": {
      "name": "Mistral",
      "baseURL": "https://api.mistral.ai/v1",
      "envKey": "MISTRAL_API_KEY"
    },
    "deepseek": {
      "name": "DeepSeek",
      "baseURL": "https://api.deepseek.com",
      "envKey": "DEEPSEEK_API_KEY"
    },
    "xai": {
      "name": "xAI",
      "baseURL": "https://api.x.ai/v1",
      "envKey": "XAI_API_KEY"
    },
    "groq": {
      "name": "Groq",
      "baseURL": "https://api.groq.com/openai/v1",
      "envKey": "GROQ_API_KEY"
    },
    "arceeai": {
      "name": "ArceeAI",
      "baseURL": "https://conductor.arcee.ai/v1",
      "envKey": "ARCEEAI_API_KEY"
    }
  },
  "history": {
    "maxSize": 1000,
    "saveHistory": true,
    "sensitivePatterns": []
  }
}

I do not recall seeing this info on GitHub. I have not tried it, as Codex CLI was removed from our computers, but maybe it is the answer: GitHub - openai/codex: Lightweight coding agent that runs in your terminal