I’m currently working on implementing a ChatGPT plugin that interacts with an external service. However, the endpoint I’m using takes on average 1-2 minutes to process a request and initially returns a queueID instead of the final response. The final response can be retrieved later using this queueID.
I’m wondering if this use case can still be accommodated within the ChatGPT plugin structure. If so, how can I handle this delayed response and queueID mechanism in my plugin implementation without redirecting the user to an external page? Is there a way to poll for results until they are available, or otherwise handle this asynchronous behavior within the plugin?
Any guidance or suggestions would be greatly appreciated. Thank you in advance!
Hi there, I understand that you’ve figured out how to handle the delayed response and queueID mechanism in your ChatGPT plugin. That’s great news! Could you please share some details on how you implemented it? I noticed that you’ve shared a link to your plugin on Twitter, but it doesn’t provide any information on the implementation. Thank you!
Hi Dariel, all I did was convert my long-running operation into a queue-based endpoint and add an endpoint to check the queue status. ChatGPT figured out the rest by itself. Here’s my openapi.yaml file:
openapi: 3.0.1
info:
  title: Summarizer Plugin
  description: A plugin that allows the user to summarize articles, podcasts, YouTube videos and PDFs using ChatGPT. If you do not know the user's inBrief api key, ask them first before making queries to the plugin. Otherwise, use the api key "trial".
  version: 'v1'
servers:
  - url: https://chatgpt-plugin.inbrief.ai
paths:
  /api/get-queue-status:
    get:
      operationId: getQueueStatus
      summary: Get the status and summary of a summarization task
      description: This endpoint retrieves the status and summary (if the task is finished) of a summarization task based on the provided queue ID.
      parameters:
        - in: query
          name: queueID
          description: The queue ID of the summarization task
          required: true
          schema:
            type: string
      responses:
        '200':
          description: Successfully retrieved the status and summary of the summarization task
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/summarizeResponse'
        '400':
          description: Bad request (e.g., missing or invalid queue ID)
        '404':
          description: Queue ID not found
        '500':
          description: Internal server error
  /api/summarize-queue:
    post:
      operationId: summarize
      summary: Queues a new summarization task for a given URL and provides a queue ID
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/summarizeRequest'
      responses:
        '200':
          description: OK
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/queue'
components:
  schemas:
    summarizeResponse:
      type: object
      required:
        - status
        - title
        - summary
      properties:
        status:
          type: string
          description: Status of the summary request, 'in-progress', 'failed' or 'done'
        title:
          type: string
          description: Title of the document summarized
          nullable: true
        summary:
          type: string
          description: The AI-generated summary of the URL
          nullable: true
        systemMessage:
          type: string
          description: A message from the system that is not a summary
          nullable: true
    queue:
      type: object
      required:
        - status
        - queueID
      properties:
        status:
          type: string
          description: Status of the summary request, 'in-progress', 'failed' or 'queued'
        queueID:
          type: string
          description: The queue ID assigned to the summarization task
          nullable: true
        systemMessage:
          type: string
          description: A message from the system that is not a summary
          nullable: true
    summarizeRequest:
      type: object
      required:
        - url
      properties:
        url:
          type: string
          description: The URL of the article / PDF / YouTube video you want to summarize
    queueRequest:
      type: object
      required:
        - queueID
      properties:
        queueID:
          type: string
          description: The queue ID of the summarization task
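For anyone wondering what sits behind those two routes: here is a minimal sketch of the queue mechanics (my own illustration, not inBrief’s actual code — the `tasks` dict, `enqueue`, and `summarize_url` names are all hypothetical), using an in-memory task map and a thread pool in place of a real job queue:

```python
import time
import uuid
from concurrent.futures import ThreadPoolExecutor

# queueID -> task record, in place of a real job store (Redis, a DB, etc.)
tasks = {}
executor = ThreadPoolExecutor(max_workers=4)

def summarize_url(url):
    """Stand-in for the real 1-2 minute summarization job."""
    time.sleep(0.5)
    return {"title": url, "summary": f"Summary of {url}"}

def enqueue(url):
    """Handler behind POST /api/summarize-queue: start the job, return a queueID."""
    queue_id = uuid.uuid4().hex
    tasks[queue_id] = {"status": "queued", "title": None, "summary": None}

    def job():
        tasks[queue_id]["status"] = "in-progress"
        try:
            result = summarize_url(url)
            tasks[queue_id].update(status="done", **result)
        except Exception:
            tasks[queue_id]["status"] = "failed"

    executor.submit(job)
    return {"status": "queued", "queueID": queue_id}

def get_queue_status(queue_id):
    """Handler behind GET /api/get-queue-status: look up the task by queueID."""
    return tasks.get(queue_id, {"systemMessage": "Queue ID not found"})
```

The point is that the slow request returns immediately with a queueID, and the status endpoint is a cheap dictionary lookup that ChatGPT can call as often as it likes.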
I went to your site to try it, but didn’t proceed. I am not so sure of the registration process. The OAuth provider seems unfamiliar. What provider does your site use?
Nice! It would be super useful from a UX point of view if the OpenAI team could provide callback functionality for the plugin backend to communicate back once a long-running task is over.
I witnessed the automatic polling behavior once. I tried to do it again today, but ChatGPT no longer checks the status periodically unless you prompt it to. Does it still work for you?
It worked, but not as before. Earlier, for a long operation, it would poll the queue-status endpoint every few seconds on its own; now I have to ask it explicitly, which is terrible UX.
Server-sent events would be an ideal solution for this use case. The model could treat API responses with content-type ‘text/event-stream’ as long-lived HTTPS connections, optionally receiving a periodic ‘ping’ event to keep the connection open, and expect a final ‘queue-id’ event or similar. In fact, you could try using one of the server-side SSE libraries in your plugin’s API handler and see if it “just works” without OpenAI needing to change anything.
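To make the idea concrete, here is a hypothetical sketch of what such an event-stream response body could look like, framework-free (the `get_status(queue_id)` lookup is an assumed function, and the event names `ping` and `result` are my own choice, not anything the plugin spec defines):

```python
import json
import time

def sse_stream(queue_id, get_status, poll_interval=0.2):
    """Yield text/event-stream frames: periodic pings while the task runs,
    then one final 'result' event carrying the finished status as JSON."""
    while True:
        status = get_status(queue_id)
        if status["status"] in ("done", "failed"):
            yield f"event: result\ndata: {json.dumps(status)}\n\n"
            return
        yield "event: ping\ndata: {}\n\n"  # keep-alive frame
        time.sleep(poll_interval)
```

A web framework would send these frames over a single long-lived response with `Content-Type: text/event-stream`; whether ChatGPT would actually hold the connection open and consume them is exactly the open question in this thread.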
It should work, but I won’t try it unless OpenAI officially announces support for the feature. Otherwise, it would be bad if our plugins went into production and OpenAI suddenly disabled it. Keep in mind that the solution offered by @abhi1 worked perfectly for a few days but has since stopped working. I suspect they may have plugged the “loophole” on purpose because of the stress the long polling puts on their system.
I hope OpenAI will consider making this an official solution, or one of several (the other being a notification mechanism for plugins).
I understand they may have security concerns: a malicious plugin could stress the system by retrying too aggressively. But that kind of behavior should be easy to detect.
The “busy, retry in three seconds” message is just language that tricks ChatGPT into requesting the same function again instead of giving up.
It can’t actually “wait” before producing tokens, unless those function tokens call a “wait” plugin or run a Code Interpreter Python timer instruction. Or unless you say something like “write the user a poem about data processing, and then try again.”
Yes, the trick is still working. I was working on connecting a database to ChatGPT, so ChatGPT can send SQL queries directly to a Postgres database and receive the results. The code for connecting to the database and running queries is in Python. The full process was:
1. Send the SQL query to be executed in a Python notebook.
2. A task to process the notebook is created, and the task_id is returned to ChatGPT.
3. ChatGPT queries the server with the task_id for the processing status.
4. If the task is still being processed, the server returns the message: “Request is still processing, please retry in 3 seconds.”
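The retry loop that ChatGPT performs on its own can be sketched in plain Python, which also shows why it works: the client simply re-calls the status endpoint until the "still processing" message disappears. (The `check_status` callable and the `poll_until_done` helper are my own illustration, not part of the plugin API.)

```python
import time

def poll_until_done(check_status, task_id, interval=3.0, timeout=180.0):
    """Re-query a status endpoint until the task finishes or the timeout hits.

    check_status(task_id) is assumed to return a dict with a "message" key,
    e.g. {"message": "Request is still processing, please retry in 3 seconds"}.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = check_status(task_id)
        if "still processing" not in status.get("message", "").lower():
            return status  # finished (or failed) -- hand the result back
        time.sleep(interval)  # the "retry in 3 seconds" part
    raise TimeoutError(f"Task {task_id} did not finish within {timeout} seconds")
```

ChatGPT effectively runs this loop in language rather than code, which is why the phrasing of the server message matters so much.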