Using an MCP server with the Responses API gives a 500

I created a Prompt that uses my custom MCP server and set reasoning to high. I then started a conversation on the platform site and it worked very well: it called my MCP server (I could see the calls on my side) and used the information from the response. I even used my MCP server successfully in Deep Research on the ChatGPT website.

When I attempt to call the prompt through the API, I get a 500 error (An error occurred while processing your request. You can retry your request, or contact us through our help center at help.openai.com if the error persists. Please include the request ID wfr_… in your message).

I tried calling the API with prompt: {id: "pmpt…"} as well as passing the entire object in directly. Neither worked.
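
For reference, the failing call looks roughly like this (a sketch only; the prompt ID and input below are placeholders, and reasoning is already configured on the Prompt itself):

import OpenAI from "openai";

const openai = new OpenAI();

// Reference the stored Prompt that already has the MCP tool attached.
const response = await openai.responses.create({
  prompt: { id: "pmpt_..." },  // placeholder; my real prompt ID is redacted
  input: "hello",              // placeholder input
});
console.log(response.output_text);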

If I remove the MCP server tool, the request works, but as soon as I add it back it fails. I tried background: true and background: false and neither worked. My server even responds to /<redacted>, which is a different bug I have seen. But no matter what I do, the API call still 500s. I even verified my organization.

Is this a known issue? I haven’t been able to find any information anywhere about it. Thank you.

Maybe this has something to do with it.

@dankantor I noticed the exact same thing. The MCP server works from the playground but not through the API. Did you find a fix for that?
Thanks

@dankantor I fixed it by adding the MCP tool again to the request in my code:

const requestBody = {
  ...body,  // the original request body (prompt, input, etc.)
  tools: [{ type: "mcp", server_label: "SERVER_LABEL", server_description: "SERVER_DESCRIPTION",
            server_url: "SERVER_URL", require_approval: "never", authorization: "TOKEN" }],
};
const stream = await this.openai.responses.create(requestBody);

This was not necessary for normal function calls. I can't explain why, but re-specifying the MCP tool in the request body fixed it.
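
In case it helps, here is a fuller sketch of that workaround, combining the stored prompt reference with the explicitly re-added MCP tool and streaming the result (the prompt ID, label, description, URL, and token are placeholders for your own values):

import OpenAI from "openai";

const openai = new OpenAI();

// Re-declare the MCP tool on the request even though the stored Prompt already has it.
const stream = await openai.responses.create({
  prompt: { id: "pmpt_..." },  // placeholder prompt ID
  input: "hello",              // placeholder input
  stream: true,
  tools: [{
    type: "mcp",
    server_label: "SERVER_LABEL",
    server_description: "SERVER_DESCRIPTION",
    server_url: "SERVER_URL",
    require_approval: "never",
    authorization: "TOKEN",
  }],
});

// Print streamed text deltas as they arrive.
for await (const event of stream) {
  if (event.type === "response.output_text.delta") process.stdout.write(event.delta);
}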