The maximum of the playground slider is set by an internal API that delivers model information to the playground. It is unsurprising that it is neither accurate nor reflects the model's full capability; GPT-4.1 will rarely produce that many tokens on its own anyway. Like the model feature information, pricing information, or call logs, it lives in a UI, behind an internal API, and is not for you.
The prompt object itself (returned by yet another API, reachable only with an owner session token from a browser request) does not include a maximum-tokens value, which answers your question: no such limit is stored on the prompt.
You're left to figure out in your own API setup whether you need to set max_output_tokens 10x higher because the "prompt" uses a reasoning model on a hard problem; whether you can request a reasoning summary, encrypted reasoning, or code interpreter inputs; whether streaming is blocked without ID verification for one prompt versus another; and whether any of those will make the API return status 400 (even when sent as null).
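Since no endpoint reports these limits for you, one pragmatic pattern is to pick request parameters per model family in your own code. A minimal sketch, assuming a 10x heuristic for reasoning models (whose reasoning tokens count against `max_output_tokens`) and a hypothetical list of model-name prefixes; none of these values come from any API:

```python
# Hypothetical per-model parameter picker. The prefixes and the 10x
# multiplier are assumptions you would tune yourself, not values any
# OpenAI endpoint returns.

REASONING_PREFIXES = ("o1", "o3", "o4")  # assumed reasoning-model families


def request_params(model: str, base_output_tokens: int = 1024) -> dict:
    """Build Responses API kwargs, reserving headroom for reasoning models."""
    params: dict = {"model": model}
    if model.startswith(REASONING_PREFIXES):
        # Reasoning tokens are billed and counted inside max_output_tokens,
        # so reserve extra room for hard problems.
        params["max_output_tokens"] = base_output_tokens * 10
        params["reasoning"] = {"summary": "auto"}  # may 400 on some models
    else:
        params["max_output_tokens"] = base_output_tokens
    return params
```

You would then splat the result into your call, e.g. `client.responses.create(input=..., **request_params("o3-mini"))`, and catch a 400 to fall back to a stripped-down parameter set.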
Here's the object that the playground loads and stores. However, the API for reading it is not available by remote call with an API key, so your application cannot create, modify, or even read these prompts. You are only consuming a product.
"data": [
{
"id": "pmpt_1234",
"object": "prompt",
"created_at": 1750117071,
"creator_user_id": "user-1234",
"default_version": "1",
"ephemeral": false,
"instructions": [
{
"type": "message",
"content": [
{
"type": "input_text",
"text": "You are a helpful programming assistant .........."
}
],
"role": "system"
}
],
"is_default": true,
"model": "gpt-4.5-preview",
"name": "API structured schema bot",
"reasoning": {
"effort": null
},
"temperature": 0.01,
"text": {
"format": {
"type": "text"
}
},
"tool_choice": "auto",
"tools": [],
"top_p": 0.01,
"updated_at": 1750117071,
"version": "1",
"version_creator_user_id": null
},
{
"id": "pmpt_