I have an MCP server + MCP App tool where the UI renders correctly in ChatGPT web and sizing works, but context information sent from the UI to the model is not applied.
The same implementation works as expected in Claude web, following the MCP Apps spec.
Is this a known limitation or compatibility gap in ChatGPT’s MCP Apps support, or could I be missing a required extension (e.g. widget state / model-context updates)?
I had similar issues. In your case it could be that the tool call result exceeds ChatGPT's per-tool-call token limit and is silently dropped.
In my case ChatGPT wasn't even silent: it blamed our server, claiming it was unreachable, which wasn't true.
Claude has a much larger per-tool-call token limit, which is probably why the same implementation works there.
In the end I had to refactor the tools to be more atomic, which in turn introduces idempotency problems, and it still doesn't work that well with ChatGPT, to be honest.
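One workaround, rather than splitting every tool, is to clamp the result payload on the server before returning it, so the host never has to drop it. A minimal sketch in Python; note the token budget and the chars-per-token ratio are assumptions (the actual per-tool-call cap in ChatGPT is not documented), and `clamp_tool_result` is a hypothetical helper, not part of any MCP SDK:

```python
import json

# Rough heuristic: ~4 characters per token. This is an assumption,
# not an official figure; the real per-tool-call cap is undocumented.
CHARS_PER_TOKEN = 4
TOKEN_BUDGET = 4000  # hypothetical budget; tune for the host you target


def clamp_tool_result(rows: list[dict], token_budget: int = TOKEN_BUDGET) -> str:
    """Serialize as many rows as fit the budget, then report how many
    were dropped, instead of letting the host discard the whole payload
    (or the call) silently."""
    kept: list[dict] = []
    for row in rows:
        candidate = kept + [row]
        if len(json.dumps(candidate)) > token_budget * CHARS_PER_TOKEN:
            break
        kept = candidate
    payload = {"items": kept, "truncated": len(rows) - len(kept)}
    return json.dumps(payload)
```

Because the result explicitly reports `"truncated"`, the model can ask for the remaining items in a follow-up call instead of failing opaquely.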
I would like to see two things from OpenAI:
Do not cancel tool calls without surfacing a meaningful error.