From timeouts to reliable polling (until MCP Tasks land)
TL;DR: The Solution
When your ChatGPT app triggers operations that take 60+ seconds, don’t wait for the tool to complete. Instead:
- Tool returns immediately with a taskId via _meta
- Widget polls using a private tool (hidden from the LLM)
- Results are displayed inline in the same widget
┌─────────────────┐ ┌─────────────────────────────────────────┐
│ create_thing │────▶│ Single Widget │
│ returns taskId │ │ 1. Polls check_task_result (private) │
│ │ │ 2. On complete → fetch & display inline│
└─────────────────┘ └─────────────────────────────────────────┘
This is the pattern we use at Glinded - The Memories Video Editor - for AI-powered operations that take 60-90 seconds.
What We Tried First
Direct Tool Call → Timeout
const generateThing = createTool('generate_thing', async (input) => {
  const result = await longRunningOperation(); // 90 seconds...
  return { content: `Generated: ${result.id}` };
});
ChatGPT tool calls time out after ~60 seconds. This won’t work.
SSE Progress Notifications → No Token
MCP supports progress notifications, but OpenAI doesn’t send a progressToken in tool calls, so we can’t use them.
Polling + sendFollowUpMessage → Ghost Widgets
We tried having the polling widget call sendFollowUpMessage to trigger a separate result widget. Three problems:
- Loading widget persists - it stays on screen showing the loading state even after the result widget appears
- Double widget on mobile - sendFollowUpMessage causes the widget to appear again when the response arrives
- requestClose() timing issues - call it before sendFollowUpMessage and the follow-up never sends; call it after and the widget reopens
No combination worked. We needed a different approach.
How It Works
The key insight: displaying results after a long-running task is deterministic. We know exactly what to show. No LLM decision needed. So skip sendFollowUpMessage entirely and render results inline.
1. Tool Returns taskId via _meta
const createThing = createTool('create_thing', async (input) => {
  const taskRef = await createTaskDocument({ status: 'working', ... });
  const taskId = taskRef.id;
  await triggerBackgroundJob({ taskId, ...input });
  return {
    content: 'Generation started!',
    _meta: { taskId }, // Widget uses this for polling
  };
});
2. Private Polling Tool (Hidden from LLM)
const checkTaskResult = createTool(
  'check_task_result',
  { ... },
  async ({ taskId }) => {
    const task = await fetchTask(taskId);
    return { structuredContent: { status: task.status } };
  },
  { visibility: 'private' } // LLM can't call this - widgets only
);
3. Widget Polls, Then Displays Inline
export default function CreateThingWidget() {
  const [state, setState] = useWidgetState({
    pollingStatus: 'polling',
    data: null,
  });

  useTaskPolling({
    onCompleted: async () => {
      const result = await callMcpTool('show_thing', { ... });
      setState({
        ...state,
        pollingStatus: 'completed',
        data: result,
      });
    },
  }, state, setState);

  if (state.pollingStatus === 'polling') {
    return <LoadingSpinner />;
  }

  // Display results inline - no sendFollowUpMessage needed!
  return <ThingDisplay data={state.data} />;
}
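useTaskPolling is our own helper, not part of the Apps SDK. Stripped of the React specifics, its core is just a loop that keeps asking for the task status until the task leaves the working state. A minimal sketch - pollUntilTerminal and fetchStatus are illustrative names, not a real API:

```typescript
type TaskStatus = 'working' | 'completed' | 'failed';

interface TaskSnapshot {
  status: TaskStatus;
}

// Calls fetchStatus until the task leaves 'working', waiting
// intervalMs between attempts, for at most maxAttempts tries.
async function pollUntilTerminal(
  fetchStatus: () => Promise<TaskSnapshot>,
  intervalMs = 2000,
  maxAttempts = 60,
): Promise<TaskSnapshot> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const snapshot = await fetchStatus();
    if (snapshot.status !== 'working') {
      return snapshot; // terminal: completed or failed
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Task did not finish within the polling budget');
}
```

In the real hook, fetchStatus wraps the private tool call - roughly callMcpTool('check_task_result', { taskId }) - and the hook bails out early when widgetState already says the task is completed.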
Gotchas
1. Persist Widget State for Page Refresh
Normal tool calls (invoked by the LLM) have their output cached automatically. But our widget makes its own tool calls (check_task_result, show_thing). These aren’t cached.
Without persisting state, page refresh re-triggers polling. Store final data in widgetState:
const [state, setState] = useWidgetState({
  pollingStatus: 'polling',
  data: null,
});

// When task completes:
setState({
  ...state,
  pollingStatus: 'completed',
  data: fetchedData, // Persisted! No re-fetch on refresh
});
2. Prevent Race Conditions with SWR
If you’re using SWR for polling, be careful with onSuccess. The completion callback might fire with stale data if a new request is in flight.
Chain your onCompleted logic inside the fetcher, not in onSuccess:
// ❌ Race condition: onSuccess fires with stale data
useSWR(key, fetcher, {
  onSuccess: (data) => {
    if (data.status === 'completed') onCompleted(data);
  },
});

// ✅ Safe: check happens inside fetcher
useSWR(key, async () => {
  const data = await fetchTaskStatus();
  if (data.status === 'completed') {
    await onCompleted(data); // Runs before SWR updates cache
  }
  return data;
});
3. Server-Side Long-Polling
Don’t just return the current status immediately. Poll on the server for ~50 seconds before returning:
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function checkTaskResult({ taskId }) {
  const startTime = Date.now();
  const TIMEOUT = 50_000; // 50 seconds (stays under the ~60s tool timeout)

  while (Date.now() - startTime < TIMEOUT) {
    const task = await fetchTask(taskId);
    if (task.status !== 'working') {
      return { structuredContent: { status: task.status } };
    }
    await sleep(2000); // Check every 2 seconds
  }

  // Still running - client issues another long-poll
  return { structuredContent: { status: 'working' } };
}
This reduces round trips from dozens to just 1-2 per task completion.
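The arithmetic behind that claim, using the numbers from this post (a 90-second task, a 2-second naive poll interval, a 50-second long-poll window):

```typescript
// Round trips needed for a 90-second task under each strategy.
const taskSeconds = 90;
const naivePollIntervalSeconds = 2;  // client re-polls every 2s
const longPollWindowSeconds = 50;    // server holds each call up to 50s

const naiveRoundTrips = Math.ceil(taskSeconds / naivePollIntervalSeconds);  // 45
const longPollRoundTrips = Math.ceil(taskSeconds / longPollWindowSeconds);  // 2
```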
4. Hide Polling Tool from LLM
Use visibility: 'private' so the LLM doesn’t try to call your polling tool directly:
createTool('check_task_result', schema, handler, {
  visibility: 'private', // Only widgets can call this
});
Future: MCP Tasks
This pattern aligns with the MCP Tasks specification. When OpenAI adds support, we can migrate from custom polling to built-in MCP polling, or use notifications/progress SSE to keep connections alive.
Until then, this polling pattern gives us reliable, timeout-safe long-running operations in ChatGPT apps.
What’s worked for you?
