Connector operations intermittently become "Resource not found" after several successful calls

For the last ~5 days, connector calls have been failing intermittently.

A given operation works correctly several times in a row.
After some number of successful invocations, the next call suddenly fails with no response; on the MCP server side I see no incoming requests at all.
Once this happens, even previously working calls to the same operation start failing in the same way, until the end of the agent run.

Interrogating the agent reveals that it is getting this error in response to the tool invocation:

```json
{
  "error": "Resource not found: /…/add_node. You can retry your request with an available resource. Your available resources are: \n\n{\"finite\": true}"
}
```

When the agent is asked to repeat the operation, it works for a while until it starts erroring in the same way again; no connector reset/reload is required.

This bug randomly kills access to connector operations mid-session, so longer automated flows can’t be trusted to finish and often have to be completed manually. Each failure also wastes tokens on retries, re-discovery of available operations, and extra explanation/debug messages, making even simple tasks more expensive than they should be.


I’m not sure the Codex CLI category matches where this actually happens. The issue I’m having is on the ChatGPT Plus plan with Developer mode enabled, using an MCP connector. It happens even with an MCP connector that has no auth enabled. Is this related to Codex CLI too? Has anyone seen similar issues using MCPs over the CLI?

Don’t suppose you figured out what was going on? I’m having the exact same thing and wondering whether it’s my end or OpenAI’s.

Unfortunately, I haven’t found any workaround for this. It appears to be an issue on OpenAI’s side.

Our MCP server is working correctly: every ListToolsRequest succeeds, which confirms the server is reachable and returns valid responses. However, when the agent attempts to invoke a tool, the invocation fails with “Resource not found”.
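To confirm that split yourself, it helps to tally which JSON-RPC methods actually reach the server. A minimal stdlib sketch (a hypothetical helper, not part of any MCP SDK; hook it into whatever request logging your server framework exposes) that counts `tools/list` versus `tools/call` requests from raw request bodies:

```python
import json
from collections import Counter


def tally_rpc_methods(raw_bodies):
    """Count JSON-RPC method names seen in raw MCP request bodies.

    Hypothetical diagnostic helper: feed it the bodies your server
    receives to see which methods arrive during a failing session.
    """
    counts = Counter()
    for body in raw_bodies:
        try:
            counts[json.loads(body).get("method", "<unknown>")] += 1
        except json.JSONDecodeError:
            counts["<malformed>"] += 1
    return counts


# In a failing session, list requests keep arriving but call requests
# never do -- matching the "server reachable, invocation lost" symptom.
seen = tally_rpc_methods([
    '{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}',
    '{"jsonrpc": "2.0", "id": 2, "method": "tools/list"}',
])
print(seen["tools/list"], seen["tools/call"])  # → 2 0
```

If `tools/call` never shows up in the tally while the agent reports "Resource not found", the failure is upstream of your server.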

The worst part is what happens next: when this issue occurs (and it happens frequently), the agent does not stop. It keeps retrying tool invocations and burns tokens, because it assumes it’s still executing the task. In reality, due to the failure, no actions are executed on the MCP server at all.

As a temporary mitigation, I had to explicitly instruct the agent to immediately stop operations as soon as this error appears.
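If you drive the tool calls from your own orchestration code rather than prompt instructions, the same stop-on-first-fatal-error rule can be enforced programmatically. A sketch under assumed names (`run_steps`, `invoke`, and the marker string are illustrative, not an OpenAI or MCP API):

```python
# Hypothetical client-side guard: abort a multi-step flow as soon as a
# tool result carries the "Resource not found" error, instead of letting
# the agent keep retrying and burning tokens.

FATAL_MARKER = "Resource not found"


def run_steps(steps, invoke):
    """Run tool invocations in order; stop at the first fatal error.

    `invoke` is whatever function performs one tool call and returns
    its raw text result.
    """
    results = []
    for step in steps:
        result = invoke(step)
        if FATAL_MARKER in result:
            # Connector is effectively dead for the rest of the session;
            # surface that and stop, rather than retrying blindly.
            results.append(f"aborted at {step!r}: connector lost")
            break
        results.append(result)
    return results


# Simulated session: the third call hits the bug, so the fourth never runs.
fake_results = iter(["ok", "ok", "Resource not found: /x", "ok"])
print(run_steps(["a", "b", "c", "d"], lambda s: next(fake_results)))
```

This doesn't fix the underlying disconnect, but it caps the token cost of each occurrence at a single failed call.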

I agree it definitely seems like an OpenAI thing. I did notice something new today, though: when it disconnects it now shows a little window within the chat offering to reconnect, which is handy, and that just launches the same connection process without interrupting the responses, at least. Ever seen that before? I also implemented a refresh token in my OAuth implementation just in case that was the cause, but it made no difference. I just don’t understand why there isn’t more noise about this, because surely it’s affecting everyone, including approved Apps!?

Yeah, I’ve seen those authentication/reconnect popups too. For me they started showing up toward the end of 2025, around the same time as the additional tool confirmation prompt (“Using tools comes with risks. Learn more”). The annoying part is that it can’t be “remembered” for the session, so it keeps appearing again and again.

My guess is that not everyone notices this because it may depend on how frequently tools are invoked in a single session. In my case, the workflow requires multiple tool invocations in series: the agent reads files, checks documentation for specific classes, and then applies changes based on the task. When it works, it actually gives me better results than using Codex alone, but with the current “Resource not found” issues, it’s no longer a reliable workflow. I reported the issue through the “report bug” form back then, but never got any feedback.


Wonder why I’ve never seen it before. Yeah, for my implementation I need them to continuously interact with my tool, which does lots of round trips and then blows past the random 10–15 requests it takes to drop out. Hoping they fix this soon.