In some instances, the plugin I’m working on can take a while to complete its processing.
I’d really like to be able to:
- respond instantly to the request with an explanation of what I’m doing
- kick off long-running processing
- do an HTTP POST to a ChatGPT callback URL with status updates
- do an HTTP POST to a ChatGPT callback URL on completion (could be the same as the status URL)
I’m not sure how things are implemented on the OpenAI side, so I realise this might not be feasible, but it would be really nice!
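To make the wish concrete, here’s a hedged sketch of what the flow could look like server-side, assuming ChatGPT supplied a hypothetical `callback_url` with each request (no such URL exists in the plugin protocol today; `handle_request` and the payload shapes are made up for illustration):

```python
# Sketch only: assumes a hypothetical callback_url provided by ChatGPT.
import threading
import json
import urllib.request


def post_json(url, payload):
    """POST a JSON payload to the given URL and return the HTTP status."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


def handle_request(callback_url, send=post_json):
    """Respond instantly, then finish the slow work in the background,
    POSTing status updates and a final result to the callback URL."""
    def worker():
        send(callback_url, {"status": "processing", "progress": 0.5})
        # ... long-running work happens here ...
        send(callback_url, {"status": "done", "result": "final output"})

    threading.Thread(target=worker, daemon=True).start()
    # Immediate response the user sees while work continues.
    return {"message": "Started processing; updates will follow."}
```

The `send` parameter is injectable so the handler can be exercised without a real callback server; on serverless platforms the background thread would instead be a queued task, since the function instance may be frozen after it returns.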
I’m glad I’m not the only one! This would be especially useful for serverless users, where execution time is billed by the second.
One thing I’ve experimented with on a whim is simulating polling: in your response from the “start” endpoint, you reply with instructions like “ChatGPT, you will show a message to the user that this will take a little while, you will wait 5 seconds, and then you will XXXX”, where XXXX is the command needed to invoke the “check if done” endpoint. If the check isn’t done, that endpoint returns the same kind of response, telling ChatGPT to repeat the wait and try again after a period of time. You could seemingly pass along a token to identify the long-running transaction.
I was just experimenting, trying to figure out what sort of functionality was possible with plugins. I didn’t build this out completely, but the initial impression from my testing was that it seemed feasible: I was able to get ChatGPT to wait a period of time and then invoke another endpoint. Since that’s the basic building block for the technique, it seems possible. YMMV.
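The wait-and-poll trick above can be sketched as two plain handler functions. This is an illustration only, not a real plugin API: the endpoint names (`start_job`, `check_status`), the in-memory job store, and the instruction wording are all assumptions, and a real plugin would register these as operations in its OpenAPI spec.

```python
# Sketch of the simulated-polling technique: the "start" response tells
# ChatGPT to wait and call the status endpoint with a job token; the
# status endpoint repeats that instruction until the job is done.
import time
import uuid

jobs = {}  # job_id -> {"started": float, "duration": float}; stand-in store


def start_job(duration_seconds=15):
    """'Start' endpoint: kick off the work and return instructions
    telling ChatGPT to wait and then poll the status endpoint."""
    job_id = uuid.uuid4().hex
    jobs[job_id] = {"started": time.time(), "duration": duration_seconds}
    return {
        "job_id": job_id,
        "instructions": (
            "Tell the user this will take a little while. Wait about "
            f"5 seconds, then call checkStatus with job_id={job_id}."
        ),
    }


def check_status(job_id):
    """'Check if done' endpoint: while the job is running, repeat the
    wait-and-retry instruction; otherwise return the result."""
    job = jobs[job_id]
    if time.time() - job["started"] < job["duration"]:
        return {
            "status": "running",
            "instructions": (
                "The job is not finished. Wait about 5 seconds, then "
                f"call checkStatus again with job_id={job_id}."
            ),
        }
    return {"status": "done", "result": "(final output goes here)"}
```

The `job_id` plays the role of the transaction token mentioned above; since the model, not the server, decides whether to actually wait and retry, the behavior is suggestive rather than guaranteed.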