Please correct me: Assistants API

I am linking the Assistants API to my Bubble apps, using cURL calls only.

My steps after reading the official docs
(with a preset assistant ID):

  1. User Input

  2. Create Thread and Run

  3. [this part confuses me] Check the run status every 0.5 seconds; if it is complete, go to step 4.

  4. List the messages and display them.

Why can't I use the Assistants API like the GPT-3.5 API, i.e. just send the input and wait for the response?

From the official docs:
By default, a Run goes into the queued state. You can periodically retrieve the Run to check on its status to see if it has moved to completed.

So I need to create a workflow that loops, checking the run status, to get the response?

Please correct me if I am wrong. Thanks in advance.

Hi and welcome to the community.
Yes, you can list the messages added to the thread once the run completes.
I think you have already seen the "How Assistants work" part of the documentation, which explains it a bit more clearly.

You can also check the run steps for a bit more transparency.


Based on my assistants, I would recommend a polling interval of more than 1 second; I have settled on 2 seconds.
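Whatever interval you settle on, the polling loop itself can stay generic. A sketch, where `get_status` stands for any callable (an HTTP call in practice) that returns the run's current status string:

```python
import time

# Terminal run statuses per the Assistants run lifecycle.
TERMINAL = {"completed", "failed", "cancelled", "expired"}

def poll_run(get_status, interval=2.0, timeout=120.0):
    """Call get_status() every `interval` seconds until the run reaches a
    terminal status; raise TimeoutError if it never does."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in TERMINAL:
            return status
        time.sleep(interval)
    raise TimeoutError("run did not finish within the timeout")
```

A `timeout` guard like this is worth having: a run stuck in `requires_action` (tool calls you never answer) would otherwise loop forever.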

Yep. We're just doing something like this:

bool isComplete = false;

while (!isComplete)
{
    // No point checking for completion right after starting the run anyway.
    await Task.Delay(TimeSpan.FromSeconds(5), cancellationToken);

    var threadRun = await _openAiHttpClient.GetThreadRunAsync(threadRunResult.ThreadId, threadRunResult.Id, cancellationToken);

    // "failed", "cancelled" and "expired" are all terminal, so bail out on any
    // of them rather than looping forever.
    if (threadRun.Status is "failed" or "cancelled" or "expired")
        throw new InvalidOperationException($"Run ended with status: {threadRun.Status}");

    isComplete = threadRun.Status == "completed";
}

Being able to listen over an open TCP connection is coming, apparently. It's in their interest as well, since polling like this likely creates more overhead for them.

Yes, you should create a loop. My recommendation is to check the status every second and, after a testing period, observe your average response time and adjust accordingly. In my case, I see an average response time of 3 seconds for text and 30 seconds when using the code interpreter. Initially, you will have to sacrifice some workload units :frowning:
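To collect that average during a testing period, it helps to time each run alongside the poll; a small sketch, where `get_status` is again any callable returning the run's status:

```python
import time

def timed_poll(get_status, interval=1.0):
    """Poll until a terminal status and return (status, elapsed_seconds).
    Collect the elapsed times over a test period, then pick a polling
    interval that fits the observed average."""
    start = time.monotonic()
    status = get_status()
    while status not in ("completed", "failed", "cancelled", "expired"):
        time.sleep(interval)
        status = get_status()
    return status, time.monotonic() - start
```

Averaging the second element over many runs tells you whether, say, a 1-second or a 2-second interval is the better trade-off for your workload.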