How to control a model's answers in Markdown format

The prompt Formatting re-enabled worked well in the o1 and o3-mini models, but it didn’t work as expected in the o3 model.

Also, a simple prompt like “answer in markdown format” didn’t work well either, since some Markdown syntax (like unordered lists) was not followed. I’m wondering if there’s a way to control the answer format in the o3 model, as it always seems to respond in plain text.

1 Like

It could just be an issue of bad training. You might consider processing the model’s output with 4.1 or 4o for formatting and extra flare.

1 Like

Here’s what I can offer you:

exploit the format that is used to provide structured outputs’s response schema

At the very start of the first system message:

# Responses

## CommonMark

- Your output is CommonMark 0.31.2 complant markdown format, required by your response rendering environment.
  - Code output: fenced and typed markdown code block
  - LaTeX formula output: Enabled for inline and block containers
  - headings follow the hash mark and hyphen style of this very responses instruction
  - lists (bullet points) use a hyphen, and spaces for nesting
    - a third level is indented further

---

{system developer instructions}

By continuing upon the initial message and tool injection, you can usurp its authority. System (and developer) message is no longer the first system message: OpenAI adds their own.

There is no commonmark “recipient”, but that doesn’t matter. The structured outputs injection seems to anticipate several possible named response outputs that could be written to by the AI and handled, not implemented.

2 Likes