GPT-5 Responses API not returning output in WordPress (GPT-4 works fine, no error but empty response)

I’m developing a custom WordPress plugin that uses the OpenAI API. GPT-4 and GPT-4-mini models work perfectly. However, when switching to GPT-5 or GPT-5-mini, the request completes successfully (HTTP 200) but the response body is completely empty — no output, no error message.

My organization verification is complete, and the API key is valid. I’m calling the endpoint https://api.openai.com/v1/responses with a properly formatted JSON payload (model, input, reasoning, max_output_tokens, etc.).

Here’s the log message from WordPress:

2025-10-23 10:17:21  
Primary model failed (OpenAI response not in expected format).  
Switching to fallback model.

I’m using wp_remote_post in PHP. Everything works fine with GPT-4, but GPT-5 returns an empty response every time.

What could cause this?
Does the GPT-5 Responses API require a different JSON structure, special headers, or any additional parameter for PHP / WordPress clients to receive output correctly?

Welcome to the community @qbkmtdnjzj

Can you confirm if gpt-5 is present in the models.list API call?

Additionally, are you able to make a cURL call to the responses API with the model from your machine:

curl https://api.openai.com/v1/responses \
  -H "Authorization: Bearer $OAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5",
    "input": "Hello World!",
    "max_output_tokens": 800
  }'
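If it helps to rule out the shell environment, here is the same request in plain Python (no SDK; urllib only, reading OAI_API_KEY from the environment as the curl example does — a sketch, not production code):

```python
import json
import os
import urllib.request

# Same payload as the curl call above.
payload = {
    "model": "gpt-5",
    "input": "Hello World!",
    "max_output_tokens": 800,
}

req = urllib.request.Request(
    "https://api.openai.com/v1/responses",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {os.environ.get('OAI_API_KEY', '')}",
        "Content-Type": "application/json",
    },
)

# Only send the request when a key is actually configured.
if os.environ.get("OAI_API_KEY"):
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["status"])
```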

First of all, thank you very much. Yes, I can do all of that and it works fine. I can see all GPT-5 family models in the models.list, and I can get responses from the terminal — but I still haven’t been able to fix the issue in WordPress.

Interesting. What value for reasoning.effort are you setting?

Can you share more about the wp_remote_post configuration in your plugin?

  1. You will need to set max_output_tokens to several thousand, or simply omit it. Reasoning tokens are billed against that limit before any visible text is produced, so a small value like 800 can be consumed entirely by reasoning, leaving you with no message at all.
  2. You will need to understand the return. The Responses API “output” field is an array containing multiple items: a reasoning summary, encrypted reasoning, and then, if you are lucky, a message or a tool call. If you are not iterating through the output items to find the message content…there’s your problem.

Example response:

print(json.dumps(response, indent=2))
{
  "id": "resp_6546461896816",
  "object": "response",
  "created_at": 1761243770,
  "status": "completed",
  "background": false,
  "billing": {
    "payer": "developer"
  },
  "error": null,
  "incomplete_details": null,
  "instructions": "A concise assistant provides brief answers.",
  "max_output_tokens": 4000,
  "max_tool_calls": null,
  "model": "gpt-5-mini-2025-08-07",
  "output": [
    {
      "id": "rs_012354654",
      "type": "reasoning",
      "encrypted_content": "gAAAAABo-nKA8vJSdwSHBe0j7cWSK1O0fYLY-rr0QS5yR2562WUqCZfm9ER4RFLhuJfDbKIM4Eb4lRakh0YDRbg21FoZPPuTBtlagRqca9sjcsh60-GVxngBcxpXA4c2pSV6jQwg1aHRU-_Z1GfH6MSWlNWfGMDYbXojS-IzBvDXAZgCP-NVpE2v5n_v1VnY1G92ii1T_l4Fpp1iIQ46pmxP2NSYim8G7SpEdyAeDs0bN67RUbArBgt9-Vu-LDJn5Lnqg_2URXs3IDjMV51XWDgMXzGT51ws_-24EC_BC5G516Q4AKxUPvaaxwGD71jTznqQvoASnVkA3AJz0x_jB5S5sjzO41tVw4oNNg-w-Hv8IIsBTNDsArQz7F35nASBgLbR9huw2PDAFeacREugf8oKRzWuOin2_PeLcp5yrrkCWgWL_WPWQC3ycRx-LJTC4cHpQer5qmUyXI68JdOskU_lHBNAX_26d01xxVraosYBQ0ngVPrM1NTLGxNGutPRImLoZWm6u0TlhCGTpZhQSGukBMqOUB2NPDJ3iAza-Lp2vHr5J3F1vPlxykJcVWdnRa5rBKJ1067KLSRuidm-6TIaQKf09tmcFMsEJ5DKCchi42HRMzLq42TD38H1-deyKEL2QGYwzLvWioi5PhUG0ueRnL2obcNl_QtOD_JCL7SpzwB0nW27LZL7RL71aXEaHGc7hY-a-MwFSrQVNGcSieWll0x9YR2jhA_Ex43SBgCTVRiH8nXnwAtJUNq-uDzn0IlRLh_TlLC7fSFhH4ZqDWrMv-QGiqKPfCQI2ILH9g6M71d6sKl-DBMs_UivJLoC6jEzFpvEu_T32RjxUDgQ6Z1tItjRdBNv4a70ylpWFByMnuGpKxzufB8RHdsbWxm-K73uJe_7LtrJOCqFHYYQ-dO7O7W_oHKPuKBl2G8tRJlhIgB5XFGx1WBG6aJT5ABECmmS-db12qf_1DclzdE3E1hGL1JtHZllp-seTeiaIEKOKSmqOWaV5cQilyJsOzKZ1e78af9CY_sV",
      "summary": [
        {
          "type": "summary_text",
          "text": "**Responding to user**\n\nThe user said \"Ping!\" which likely means they expect a \"Pong!\" response. I think it's a good idea to keep my reply concise. Since I want to be helpful, I'll respond with \"Pong!\" and also offer to assist them with anything else they might need. It's nice to be engaging while keeping it simple! I\u2019m ready to assist if they want anything more."
        }
      ]
    },
    {
      "id": "msg_6544519811",
      "type": "message",
      "status": "completed",
      "content": [
        {
          "type": "output_text",
          "annotations": [],
          "logprobs": [],
          "text": "Pong! Need anything?"
        }
      ],
      "role": "assistant"
    }
  ],
  "parallel_tool_calls": false,
  "previous_response_id": null,
  "prompt_cache_key": null,
  "reasoning": {
    "effort": "low",
    "summary": "detailed"
  },
  "safety_identifier": null,
  "service_tier": "priority",
  "store": false,
  "temperature": 1.0,
  "text": {
    "format": {
      "type": "text"
    },
    "verbosity": "medium"
  },
  "tool_choice": "auto",
  "tools": [],
  "top_logprobs": 0,
  "top_p": 1.0,
  "truncation": "disabled",
  "usage": {
    "input_tokens": 19,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 12,
    "output_tokens_details": {
      "reasoning_tokens": 0
    },
    "total_tokens": 31
  },
  "user": null,
  "metadata": {},
  "output_text": "Pong! Need anything?"
}

I have an “output_text” field there not because I am using the SDK (which adds it as a convenience property), but because:

response["output_text"] = "".join(
    content_block["text"]
    for message in response.get("output", [])
    if message.get("type") == "message" and message.get("role") == "assistant"
    for content_block in message.get("content", [])
    if content_block.get("type") == "output_text" and "text" in content_block
)
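Run against a minimal version of the response above (only the fields the comprehension actually reads), the join skips the reasoning item and pulls out just the assistant message text:

```python
# Minimal decoded response: one reasoning item, one assistant message.
response = {
    "output": [
        {"type": "reasoning", "summary": []},
        {
            "type": "message",
            "role": "assistant",
            "content": [{"type": "output_text", "text": "Pong! Need anything?"}],
        },
    ]
}

response["output_text"] = "".join(
    content_block["text"]
    for message in response.get("output", [])
    if message.get("type") == "message" and message.get("role") == "assistant"
    for content_block in message.get("content", [])
    if content_block.get("type") == "output_text" and "text" in content_block
)

print(response["output_text"])  # Pong! Need anything?
```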