We’ve been building MCP Apps for ChatGPT and have hit two separate bugs that, combined, make the platform essentially unusable for production apps.
We are following the recommended approach from the MCP Apps compatibility guide, which positions MCP Apps as the standard way to build app UIs going forward.
Bug 1: `_meta` stripped from tool results
ChatGPT doesn’t forward `_meta` from `CallToolResult` to the widget via
`ui/notifications/tool-result`. This breaks the documented `viewUUID` pattern for state
persistence. `params.content` and `params.structuredContent` arrive correctly, but
`params._meta` is always `undefined`.
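To make the failure concrete, here is a minimal widget-side sketch of reading the view UUID out of a forwarded tool-result notification. The interface shape and the `viewUUID` key name inside `_meta` are illustrative assumptions, not the exact spec wire format:

```typescript
// Hedged sketch of the params a widget receives from the
// ui/notifications/tool-result notification. Field names follow the
// shapes described above; the "viewUUID" meta key is illustrative.
interface ToolResultParams {
  content?: unknown[];
  structuredContent?: Record<string, unknown>;
  _meta?: Record<string, unknown>;
}

// Pull the view UUID out of _meta, or null if it is missing.
function extractViewUUID(params: ToolResultParams): string | null {
  const uuid = params._meta?.["viewUUID"];
  return typeof uuid === "string" ? uuid : null;
}

// What we observe in ChatGPT today: content and structuredContent arrive,
// but _meta is stripped, so state persistence can never bootstrap.
const observed: ToolResultParams = {
  content: [{ type: "text", text: "ok" }],
  structuredContent: { items: [] },
  // _meta: undefined in the forwarded notification
};
console.log(extractViewUUID(observed)); // null
```

With `_meta` present the same helper would return the UUID, which is what the persistence pattern depends on.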
Bug 2: `ontoolresult` not replayed after page refresh
When the user refreshes the page, the iframe reconnects but the host doesn’t re-send the
last tool-result notification, so the widget is stuck in its loading state with no way to
recover.
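For anyone hitting the same wall, this is the workaround we’re considering: cache the last tool result ourselves so a refreshed iframe can rehydrate instead of hanging. Everything here (the cache key, the `KVStore` abstraction) is our own sketch, not part of the MCP Apps bridge; in the browser you would pass `sessionStorage` as the store:

```typescript
// Hedged workaround sketch: persist the last tool result so a reloaded
// widget can rehydrate even though the host never replays the notification.
// Storage is abstracted so sessionStorage can be passed in the browser.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const CACHE_KEY = "last-tool-result"; // illustrative key name

// Call this from the widget's tool-result handler.
function cacheToolResult(store: KVStore, params: unknown): void {
  store.setItem(CACHE_KEY, JSON.stringify(params));
}

// Call this on widget startup as a fallback when no notification arrives.
function rehydrate(store: KVStore): unknown | null {
  const raw = store.getItem(CACHE_KEY);
  return raw ? JSON.parse(raw) : null;
}
```

This only papers over the bug (it can’t recover results produced before the cache was written), which is why a host-side replay fix is the thing we actually need.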
Impact
These are standard MCP Apps patterns and not edge cases or ChatGPT-specific extensions.
Since the compatibility guide tells developers to “build with the MCP Apps standard keys and bridge by default,” it’s important that the standard bridge works reliably.
Together these make it hard to build apps that work in ChatGPT:
- State persistence via `viewUUID` doesn’t work (Bug 1)
- Widgets don’t survive a page refresh (Bug 2)
Would love to know
Should we keep investing in MCP Apps and wait for fixes, or should we move back to
`window.openai`? Some clarity on whether these bugs are being tracked and will be fixed
would really help us decide how to move forward.