I just got into Playground AI for tonight’s session and it still tells me my queue is too high! When will this waiting time end!? I just got back in after 24 hours and I want to start generating my stuff again! I can’t take it anymore! I have no patience! I hate waiting! Playground will NEVER let me back in at this rate!
Hey folks, I’ve been experiencing this lately as well. It’s super weird because for the past few weeks it’s worked perfectly fine. It also happens randomly. Anyone been experiencing this issue today specifically?
What was wrong with your function? From what I understood, the “queued” state of a run is determined on OpenAI’s side.
How did you solve the problem regarding the slow response times?
So my problem was that I used the API to write another function for a self-developed user interface, and I wasn’t refreshing my run status (something like the wait_on_run function from the documentation), which is why it took forever for the state to move from “queued” to “in_progress”. You can try implementing this function to see if it works:
import time

# assumes an initialized OpenAI `client`
def wait_on_run(run, thread):
    while run.status in ("queued", "in_progress"):
        # print("the run is still in progress, please wait a second.")
        run = client.beta.threads.runs.retrieve(
            thread_id=thread.id,
            run_id=run.id,
        )
        time.sleep(1)
    return run
I used your function but still get an endless stream of:
print("the run is still in process, please wait a second.")
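If the loop never ends, the run is probably stuck server-side, so a timeout (after which you could cancel the run) avoids waiting forever. A minimal sketch of that idea — the helper name, parameters, and backoff values below are my own, not from the OpenAI SDK:

```python
import time

def poll_until_done(fetch_status, timeout=120, interval=1.0,
                    backoff=1.5, max_interval=10.0):
    """Poll fetch_status() until the run leaves queued/in_progress.

    fetch_status: zero-arg callable returning the current status string.
    Returns the terminal status, or raises TimeoutError at the deadline.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status not in ("queued", "in_progress"):
            return status
        time.sleep(interval)
        # back off gradually instead of hammering the API once per second
        interval = min(interval * backoff, max_interval)
    raise TimeoutError("run did not leave queued/in_progress in time")
```

With the Assistants API you would pass something like `lambda: client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id).status` as `fetch_status`, and on `TimeoutError` cancel the run rather than looping forever.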
I’m facing a response delay on the second message.
message = client.beta.threads.messages.create(
    thread_id=thread_id,
    role="user",
    content=PROMPT + user_input,
    file_ids=file_id,
)
Here, I got the response within seconds for the first message and all the others, but for the second message this part alone took 120 seconds. I’m facing this issue constantly.
Found a duplicate thread with the same issue. Just posted there.
Could you share your thread or run ID so that we can take a look?
Hey, I am facing the same issue today. May I know how you resolved it?
In my case, it resolved itself after about 8 hrs. Seems to be more of an internal issue.
I was having the same problem and this workaround did it for me. Normally, instead of the `pass`, that loop is used to stream the response by printing the text, but as backend functionality simply draining the stream works to update the messages list.
tool_outputs = handle_requires_action(run)
with client.beta.threads.runs.submit_tool_outputs_stream(
    thread_id=run.thread_id,
    run_id=run.id,
    tool_outputs=tool_outputs,
    event_handler=AssistantEventHandler(),
) as stream:
    for text in stream.text_deltas:
        pass
Same issue again.
Examples:
- thread_TAlQHyssLTpyqjLFncXifIHZ, run_mYxRbtHOtxJTFMHa4vy62CIk, queued
- thread_pDCF0mAE3FMciNXtpWH0TXbC, run_LEBL2jLhocH3PkDqBnjDCqSC, cancelling
It’s happening to me too. Is it related to how many words we put in the box?
Hello, can you help me?
thread_IGMGAbMZnSvoI1GFWXFC93zx
run_rRIj0KMvQpOH9HPtPRmGIsqw