Sometimes what you need is not where you need it.
By default, GPT-5 in the API does not format its final answers in Markdown, in order to preserve maximum compatibility with developers whose applications may not support Markdown rendering. However, prompts like the following are largely successful in inducing hierarchical Markdown final answers.
- Use Markdown **only where semantically correct** (e.g., `inline code`, ```code fences```, lists, tables).
- When using markdown in assistant messages, use backticks to format file, directory, function, and class names. Use \( and \) for inline math, \[ and \] for block math.
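For instance, here is a minimal sketch of passing instructions like these as request-level guidance through the Responses API with the official Python SDK. The model name, the exact prompt wording, and the example input are illustrative, not prescriptive.

```python
from openai import OpenAI

client = OpenAI()

# Illustrative condensation of the Markdown instructions quoted above.
MARKDOWN_INSTRUCTIONS = (
    "Use Markdown only where semantically correct "
    "(e.g., inline code, code fences, lists, tables). "
    "Use backticks to format file, directory, function, and class names. "
    "Use \\( and \\) for inline math, \\[ and \\] for block math."
)

response = client.responses.create(
    model="gpt-5",
    instructions=MARKDOWN_INSTRUCTIONS,  # system-level guidance for this request
    input="Summarize the differences between lists and tuples in Python.",
)

print(response.output_text)  # final answer, now formatted as Markdown
```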
Occasionally, adherence to Markdown instructions specified in the system prompt can degrade over the course of a long conversation. In the event that you experience this, we’ve seen consistent adherence from appending a Markdown instruction every 3-5 user messages.
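One way to do this, sketched below, is to interleave the reminder before every Nth user message. The cadence (every fourth user turn here), the reminder wording, and the use of a "developer" role message are assumptions; adapt them to your own message schema.

```python
REMINDER = "Reminder: format your final answer in Markdown where semantically correct."

def add_user_message(history: list[dict], text: str, every_n: int = 4) -> None:
    """Append a user message, injecting the Markdown reminder every `every_n` user turns."""
    user_turns = sum(1 for m in history if m["role"] == "user") + 1
    if user_turns % every_n == 0:
        history.append({"role": "developer", "content": REMINDER})
    history.append({"role": "user", "content": text})

history: list[dict] = []
for question in ["First question", "Second question", "Third question", "Fourth question"]:
    add_user_message(history, question)
# The reminder is now interleaved just before the fourth user message.
```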
One of the first things to do when a new LLM is published is to find and read all of the newly published documentation, even if it does not seem relevant to what you need.
Other points of note related to the API:
We provide a `reasoning_effort` parameter to control how hard the model thinks and how willingly it calls tools; the default is `medium`, but you should scale up or down depending on the difficulty of your task.
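As a hedged sketch, assuming the Responses API accepts a `reasoning={"effort": ...}` option as described in the GPT-5 launch docs, a lightweight lookup and a harder planning task might be configured differently (the example prompts are placeholders):

```python
from openai import OpenAI

client = OpenAI()

# A quick lookup: dial effort down to keep latency and cost low.
quick = client.responses.create(
    model="gpt-5",
    reasoning={"effort": "low"},
    input="What HTTP status code means 'Too Many Requests'?",
)

# A harder, multi-step task: dial effort up so the model thinks longer
# and is more willing to call tools.
hard = client.responses.create(
    model="gpt-5",
    reasoning={"effort": "high"},
    input="Plan a migration of a monolith's billing module to a separate service.",
)

print(quick.output_text)
print(hard.output_text)
```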
In GPT-5 we introduce a new API parameter called `verbosity`, which influences the length of the model's final answer, as opposed to the length of its thinking.
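A minimal sketch of the distinction, assuming verbosity is passed under the `text` options of the Responses API (check the current API reference for the exact shape): the same question is asked twice, and only the final-answer length budget changes.

```python
from openai import OpenAI

client = OpenAI()

QUESTION = "Explain what a race condition is."

# Short final answer; the model's internal reasoning budget is unaffected.
terse = client.responses.create(
    model="gpt-5",
    text={"verbosity": "low"},
    input=QUESTION,
)

# Expansive final answer to the same question.
detailed = client.responses.create(
    model="gpt-5",
    text={"verbosity": "high"},
    input=QUESTION,
)

print(len(terse.output_text), "characters vs.", len(detailed.output_text), "characters")
```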