Even 60+ second delay after 'Starting work'

I’m working on a custom GPT: 1 Action, 7 endpoints.
Environment: Flask, ngrok, SQL Server database.
Tested in the Chrome browser (GPT Builder / Preview chat) and in the ChatGPT app on Android.

Workflow:

  1. User uploads a 100 KB PDF (a single invoice page).

  2. Within a few seconds, the ‘Starting action’ message is displayed.

  3. … waiting time of 30–90 seconds …

  4. The ‘GPT wants to talk to ngrok’ message is displayed.

This delay is huge for processing a 100 KB, one-page PDF invoice! It’s simply not feasible for production.

Any idea what’s going on? My understanding is that the time is consumed by the front end (i.e., the chat conversation interface provided by chatgpt.openai.com or the ChatGPT app for Android).
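
One way to check where the time goes is to timestamp requests on the backend: if the Flask handler itself returns quickly but the request only arrives 30–90 seconds after ‘Starting action’, the delay is happening before the call ever reaches ngrok/Flask. A minimal sketch, assuming a placeholder /parse-invoice endpoint rather than my real schema:

```python
import time
import logging

from flask import Flask, jsonify, request

logging.basicConfig(level=logging.INFO)
app = Flask(__name__)

@app.before_request
def log_arrival():
    # Timestamp the moment the Action call actually reaches the backend.
    request.environ["arrival_ts"] = time.monotonic()
    logging.info("%s %s arrived", request.method, request.path)

@app.after_request
def log_duration(response):
    # If this number is small, the 30-90 s is spent before the request
    # ever reaches Flask/ngrok, i.e. on the ChatGPT side.
    elapsed = time.monotonic() - request.environ.get("arrival_ts", time.monotonic())
    logging.info("%s handled in %.2f s", request.path, elapsed)
    return response

# Placeholder for one of the 7 endpoints; not the real route or payload.
@app.route("/parse-invoice", methods=["POST"])
def parse_invoice():
    payload = request.get_json(silent=True) or {}
    return jsonify({"status": "ok", "fields_received": sorted(payload)})

if __name__ == "__main__":
    # Exposed publicly with: ngrok http 5000
    app.run(port=5000)
```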

Any solution?

Same issue here. I’m never able to get past this loop. It’s a huge, breaking bug for our users.

How do we overcome this?! We are affected too: users think it’s our developers’ bug.

Just FYI: I noticed that there were/are intermittent OpenAI API errors that were causing our async pipelines to stall due to uncaught exceptions. We now trap and retry timed-out API calls, roughly as in the sketch below.
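
A minimal synchronous sketch of that trap-and-retry idea (the exceptions caught, the model name, and the call_with_retry helper are illustrative placeholders; adapt for your own async pipeline):

```python
import time
import logging

from openai import OpenAI, APIConnectionError, APITimeoutError, RateLimitError

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def call_with_retry(fn, *args, max_attempts=4, base_delay=2.0, **kwargs):
    """Trap transient OpenAI API failures and retry with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn(*args, **kwargs)
        except (APITimeoutError, APIConnectionError, RateLimitError) as exc:
            if attempt == max_attempts:
                raise  # out of retries: surface the error instead of stalling
            delay = base_delay * 2 ** (attempt - 1)
            logging.warning("OpenAI call failed (%s); retry %d/%d in %.0f s",
                            type(exc).__name__, attempt, max_attempts, delay)
            time.sleep(delay)

# Usage: wrap whatever call was stalling the pipeline.
resp = call_with_retry(
    client.chat.completions.create,
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "ping"}],
)
```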

ymmv