GPT 5 returning empty reasoning and no content output with stored prompt

GPT-5 is returning empty reasoning and no content output with a stored prompt. Changing only the model on the stored prompt to 4.1 gives the expected content.

I’m making PHP cURL calls to the Responses API using a stored prompt.

Here is my code:

private function buildRequestData(): array
{
    return [
        'prompt' => [
            'id'        => self::PROMPT_ID,
            'version'   => self::PROMPT_VERSION,
            'variables' => [
                'part_name'    => $this->partNumber,
                'manufacturer' => $this->manufacturer,
            ],
        ],
    ];
}



/**
 * Send HTTP POST to OpenAI Responses API using plain PHP cURL.
 *
 * @param array $data
 * @return array
 * @throws \InvalidArgumentException|\RuntimeException
 */
private function sendApiRequest(array $data): array
{
    $payload = json_encode($data, JSON_UNESCAPED_SLASHES);
    if ($payload === false) {
        throw new \InvalidArgumentException('Failed to encode request payload to JSON: ' . json_last_error_msg());
    }

    $ch = curl_init(self::API_URL);
    if ($ch === false) {
        throw new \RuntimeException('Unable to initialize cURL');
    }

    $headers = [
        'Content-Type: application/json',
        'Authorization: Bearer ' . $this->apiKey,
    ];

    $options = [
        CURLOPT_POST            => true,
        CURLOPT_HTTPHEADER      => $headers,
        CURLOPT_POSTFIELDS      => $payload,
        CURLOPT_RETURNTRANSFER  => true,
        CURLOPT_TIMEOUT         => self::TIMEOUT_SECONDS,
        CURLOPT_CONNECTTIMEOUT  => 15,
        CURLOPT_FOLLOWLOCATION  => true,
        CURLOPT_MAXREDIRS       => 3,
        CURLOPT_ENCODING        => '',
    ];

    if (!curl_setopt_array($ch, $options)) {
        curl_close($ch);
        throw new \RuntimeException('Failed to set cURL options');
    }

    $rawResponse = curl_exec($ch);
    $curlErrorNo = curl_errno($ch);
    $curlError   = curl_error($ch);
    $httpStatus  = (int) curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    if ($curlErrorNo !== 0) {
        throw new \RuntimeException('cURL error (' . $curlErrorNo . '): ' . $curlError);
    }

    if ($rawResponse === false) {
        throw new \RuntimeException('Request failed with an unknown cURL error');
    }

    if ($httpStatus >= 400) {
        $errorDetails = '';
        $decodedError = json_decode($rawResponse, true);
        if (is_array($decodedError) && isset($decodedError['error']['message'])) {
            $errorDetails = $decodedError['error']['message'];
        }

        $snippet = substr($rawResponse, 0, 500);
        throw new \RuntimeException(
            'OpenAI Responses API error (HTTP ' . $httpStatus . '): ' .
            ($errorDetails !== '' ? $errorDetails : 'Request failed. Response: ' . $snippet)
        );
    }

    $json = json_decode($rawResponse, true);
    if (json_last_error() !== JSON_ERROR_NONE) {
        $snippet = substr($rawResponse, 0, 500);
        throw new \RuntimeException('Invalid JSON returned by model: ' . json_last_error_msg() . '. Raw response: ' . $snippet);
    }

    return $json;
}

The request goes through, but the API response contains empty reasoning and no actual answer:

API response:
{
  "id": "resp_68a605ca5f38819fb4da43a2b788687d06032eb824e2530e",
  "object": "response",
  "created_at": 1755710922,
  "status": "completed",
  "background": false,
  "error": null,
  "incomplete_details": null,
  "instructions": [
    {
      "type": "message",
      "content": [
        {
          "type": "input_text",
          "text": "Role: You are // omitted ues"
        }
      ],
      "role": "developer"
    },
    {
      "type": "message",
      "content": [
        {
          "type": "input_text",
          "text": "part name: 07.Z2.F04-1003\nmanufacturer: KEB"
        }
      ],
      "role": "user"
    }
  ],
  "max_output_tokens": 2048,
  "max_tool_calls": null,
  "model": "gpt-5-2025-08-07",
  "output": [
    {
      "id": "rs_68a605cb2f60819f94115571dc36adac06032eb824e2530e",
      "type": "reasoning",
      "summary": []
    },
    {
      "id": "ws_68a605d533c0819fb5245a20c1ef591706032eb824e2530e",
      "type": "web_search_call",
      "status": "completed",
      "action": {
        "type": "search",
        "query": "KEB 07.Z2.F04-1003 datasheet"
      }
    },
    {
      "id": "rs_68a605d8fffc819f9256281b3c62b43506032eb824e2530e",
      "type": "reasoning",
      "summary": []
    },
    {
      "id": "ws_68a605db335c819fbe2ef9ea5d71e0f206032eb824e2530e",
      "type": "web_search_call",
      "status": "completed",
      "action": {
        "type": "search"
      }
    },
    {
      "id": "rs_68a605dbec08819f8c0e6c4c24f68f8106032eb824e2530e",
      "type": "reasoning",
      "summary": []
    },
    {
      "id": "ws_68a605dc18a4819fb86f6fc1cef78f3806032eb824e2530e",
      "type": "web_search_call",
      "status": "completed",
      "action": {
        "type": "search"
      }
    },
    {
      "id": "rs_68a605dd3ba8819f938073f83fc2a4a606032eb824e2530e",
      "type": "reasoning",
      "summary": []
    },
    {
      "id": "ws_68a605dfe8d0819fa98472095faaa95106032eb824e2530e",
      "type": "web_search_call",
      "status": "completed",
      "action": {
        "type": "search"
      }
    },
    {
      "id": "rs_68a605e0c7a4819faa5b0445fa3baf7d06032eb824e2530e",
      "type": "reasoning",
      "summary": []
    },
    {
      "id": "ws_68a605e1e1e8819f91606f6e5370a2da06032eb824e2530e",
      "type": "web_search_call",
      "status": "completed",
      "action": {
        "type": "search"
      }
    },
    {
      "id": "rs_68a605e2919c819fb630d6ed76b63fee06032eb824e2530e",
      "type": "reasoning",
      "summary": []
    },
    {
      "id": "ws_68a605e45874819f88e587de66ddc2ea06032eb824e2530e",
      "type": "web_search_call",
      "status": "completed",
      "action": {
        "type": "search"
      }
    },
    {
      "id": "rs_68a605e5c7f0819fb728bb09608071fa06032eb824e2530e",
      "type": "reasoning",
      "summary": []
    }
  ],
  "parallel_tool_calls": true,
  "previous_response_id": null,
  "prompt": {
    "id": "pmpt_68a5ee06a7808195909d2bff59c5732704d594b95d42727f",
    "variables": {
      "part_name": {
        "type": "input_text",
        "text": "07.Z2.F04-1003"
      },
      "manufacturer": {
        "type": "input_text",
        "text": "KEB"
      }
    },
    "version": "3"
  },
  "prompt_cache_key": null,
  "reasoning": {
    "effort": "high",
    "summary": null
  },
  "safety_identifier": null,
  "service_tier": "auto",
  "store": true,
  "temperature": 1,
  "text": {
    "format": {
      "type": "json_schema",
      "description": null,
      "name": "part_technical_specification",
      "schema": {
        // schema omitted
      },
      "strict": true
    },
    "verbosity": "medium"
  },
  "tool_choice": "auto",
  "tools": [
    {
      "type": "web_search_preview",
      "search_context_size": "medium",
      "user_location": {
        "type": "approximate",
        "city": null,
        "country": null,
        "region": null,
        "timezone": null
      }
    }
  ],
  "top_logprobs": 0,
  "top_p": 1,
  "truncation": "disabled",
  "usage": {
    "input_tokens": 108532,
    "input_tokens_details": {
      "cached_tokens": 87680
    },
    "output_tokens": 1664,
    "output_tokens_details": {
      "reasoning_tokens": 1664
    },
    "total_tokens": 110196
  },
  "user": null,
  "metadata": {}
}

If I change the prompt model to 4.1, I get an output with the expected contents.

Any advice?

I think a bug was introduced with the recent changes to the prompts UI in the dashboard.

It used to be possible to set a max_output_tokens parameter, which is no longer available.

However… it is still saved on the prompt, in a field you can no longer reach from the UI.

For now, try adding max_output_tokens = 10000 to your request (or more, depending on how long you expect your output to be) and try again. Doing so will override the prompt setting that is currently unreachable.
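As a sketch of that workaround, the request body from the original post could be extended with a top-level max_output_tokens key. The prompt id/version and variable values below are taken from the thread; the 10000 figure is just the suggested starting point, not a recommendation from the API docs.

```php
<?php
// Same request shape as buildRequestData() above, with max_output_tokens
// added at the top level to override the value stored on the prompt.
$data = [
    'prompt' => [
        'id'        => 'pmpt_68a5ee06a7808195909d2bff59c5732704d594b95d42727f',
        'version'   => '3',
        'variables' => [
            'part_name'    => '07.Z2.F04-1003',
            'manufacturer' => 'KEB',
        ],
    ],
    // Leave generous headroom: with reasoning models, reasoning tokens
    // count against this limit before any visible output is produced.
    'max_output_tokens' => 10000,
];

echo json_encode($data, JSON_UNESCAPED_SLASHES), PHP_EOL;
```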

The default used to be 2048, which, in the case of reasoning models, causes empty output if the limit is reached.

If the generated tokens reach the context window limit or the max_output_tokens value you’ve set, you’ll receive a response with a status of incomplete and incomplete_details with reason set to max_output_tokens. This might occur before any visible output tokens are produced, meaning you could incur costs for input and reasoning tokens without receiving a visible response.
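A caller can detect that case before trying to read any output text. The status and incomplete_details.reason fields come from the docs passage above; the helper function name is mine, and $response stands for the decoded array that something like sendApiRequest() would return.

```php
<?php
// Returns true when the response was cut off by the output-token limit,
// i.e. status "incomplete" with reason "max_output_tokens".
function hitOutputTokenLimit(array $response): bool
{
    return ($response['status'] ?? '') === 'incomplete'
        && ($response['incomplete_details']['reason'] ?? '') === 'max_output_tokens';
}

// Example: a truncated response shaped like the docs excerpt describes.
$truncated = [
    'status' => 'incomplete',
    'incomplete_details' => ['reason' => 'max_output_tokens'],
];
var_dump(hitOutputTokenLimit($truncated)); // bool(true)
```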

Yeah, I’ve now figured this out, which is ridiculous.

Thank you for letting me know.

I think it may be intercepted by the API layer. Gemini 2.5 also runs into this.