"GPT 5.1 Codex does not support System message" in Playground

I’m trying to build a coding agent with gpt-5.1-codex (and its mini version). I created an initial prompt that works OK, and I now want to start optimizing it by creating evals in the OpenAI Playground.
However, I noticed that when I select gpt-5.1-codex or its mini version in the Playground, the interface says that these models do not support system or developer messages, which leaves me very confused for the following reasons:

  1. I cannot find this stated anywhere else in the docs.
  2. If I submit a prompt with a system message via the API (Responses), it seems to be accepted fine (i.e. the model respects what I asked).
  3. At this link, when explaining how to prompt the Codex models, it’s stated: “Start with a minimal prompt inspired by the Codex CLI system prompt […]”, which implies that a system prompt can be passed (?).

So, what’s the correct approach here? Is this a limitation of the Playground, or is there some buried information about the Codex models that I haven’t found?

Thanks in advance

This is a fault or oversight in the Chat playground UI, perhaps a fallback applied to models like o1-preview that had no provision for any super-user message.

On the Responses API, I have success with either the “instructions” API parameter or a role-based message, where even “system” is accepted and apparently demoted to “developer”.

Python/SDK:

from openai import OpenAI
client = OpenAI()

instructions = r"""
You are NovaThread.
You are a helpful conversational expert at writing computer code.
Provide your final output code product in a markdown code fence.
Follow up any code with a summary of the solution you've provided.
""".strip()

user_task = r"""
What does your assigned name semantically imply?
""".strip()

response = client.responses.create(
    model="gpt-5.1-codex-mini",
    max_output_tokens=9999,  # must budget for both internal reasoning and output
    store=False,
    reasoning={"effort": "low"},  # used with reasoning models like gpt-5
    # instructions=instructions,  # alternative: pass instructions here instead of a developer message
    input=[
        {
            "type": "message",
            "role": "developer",
            "content": [
                {
                  "type": "input_text",
                  "text": instructions,
                }
            ]
        },
        {
            "type": "message",
            "role": "user",
            "content": [
                {
                  "type": "input_text",
                  "text": user_task,
                }
            ]
        }
    ],
)
assistant = response.output_text
print(assistant)
print(response.usage.model_dump())
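
The same request can also be expressed with the “instructions” parameter instead of a developer message. A minimal sketch of that variant (the actual `client.responses.create(...)` call is left commented out so the snippet stands alone without an API key; the instruction and task strings are shortened placeholders):

```python
# Sketch: identical request, but the system-level text is passed via the
# `instructions` parameter rather than a role-based developer message.
request_kwargs = {
    "model": "gpt-5.1-codex-mini",
    "max_output_tokens": 9999,       # budget for internal reasoning + output
    "store": False,
    "reasoning": {"effort": "low"},
    "instructions": "You are NovaThread, a helpful expert at writing computer code.",
    "input": "What does your assigned name semantically imply?",
}
# response = client.responses.create(**request_kwargs)
# print(response.output_text)
print(sorted(request_kwargs))
```

Note that a plain string is accepted for `input` when there is only a single user turn, so the message-list form is only needed when you mix roles.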