Assistants API is not responding, the run is stuck in "in_progress" status

We deployed an application using the Assistants API, but unfortunately it hasn't been working since last week. Does anyone have an idea what the issue could be?

from openai import OpenAI
import time

client = OpenAI()

agent = client.beta.assistants.retrieve("asst_id")

thread = client.beta.threads.create()

query = "recommend me some courses for improving my sleep"

message = client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content=query,
)

run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=agent.id,
)

while True:
    run = client.beta.threads.runs.retrieve(
        thread_id=thread.id,
        run_id=run.id,
    )

    if run.status == "completed":
        messages = client.beta.threads.messages.list(thread_id=thread.id)
        latest_message = messages.data[0]
        text = latest_message.content[0].text.value
        break

    time.sleep(1)

The run status stays "in_progress" for a very long time and no response comes back from the API; the same assistant works fine in the Playground.
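One thing worth doing while this is broken: don't poll only for "completed", because a run can also end up in "failed", "cancelled", or "expired", and a bare `completed` check spins forever on those. A minimal defensive-polling sketch (the `retrieve_status` callable and the timeout value are my own placeholders, not part of the SDK):

```python
import time

# Terminal run statuses for an Assistants API run
TERMINAL = {"completed", "failed", "cancelled", "expired"}

def wait_for_run(retrieve_status, timeout_s=600, poll_s=1.0):
    """Poll retrieve_status() until the run reaches a terminal status
    or the client-side timeout expires.

    retrieve_status: zero-arg callable returning the current status
    string, e.g.
        lambda: client.beta.threads.runs.retrieve(
            thread_id=thread.id, run_id=run.id).status
    """
    status = None
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = retrieve_status()
        if status in TERMINAL:
            return status
        time.sleep(poll_s)
    raise TimeoutError(f"run still {status!r} after {timeout_s}s")
```

With this, a run stuck in "in_progress" raises a `TimeoutError` instead of hanging the worker, and failed or expired runs are surfaced instead of silently looped on.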

I am experiencing the same problem. Assistant API seems to be quite unreliable at the moment.

it seems to be completely broken right now, see the other thread active in the forum at the moment.


The worst thing is when the status page says "All Systems Operational"; that hits the hardest! These guys are still in bed! Trying to report the bug through the help portal feels like it falls into a black hole.


This is still something we’re experiencing as well.

I am experiencing this as well. I believe it has something to do with SyncCursorPage when you call client.beta.threads.messages.list()

Hi @smbrandonjr

No, it's not the case with client.beta.threads.messages.list(); we added a logger to identify the issue, and it's stuck on run.status.

It works perfectly in the OpenAI Playground.

I have the same issue in my product. The Assistants API really seems dead right now. No runs finish for me; they get killed after 10 minutes of the server 'thinking'.
It is really disappointing that the OpenAI dev team is ignoring these reports.

I've been trying to get in contact with the OpenAI support team; we use the enterprise version and there's still no support from them. We're exploring Azure OpenAI Service for now, and if the issue persists we'll have to move on to other LLMs.


We are thinking the same. If OpenAI cannot provide a reliable service, then we need to find something more stable for our live product.

Write your own code to provide RAG and inner thought, and use the Chat Completions API. Simples.
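For anyone weighing that suggestion, the shape of it is: retrieve relevant snippets yourself, stuff them into the prompt, and call Chat Completions directly instead of relying on Assistants runs. A bare-bones sketch, using naive keyword overlap for retrieval (the corpus, the scoring, and the model name are illustrative assumptions, not a production RAG pipeline):

```python
# Toy document store; in practice you'd use embeddings + a vector DB.
DOCS = [
    "Course A: sleep hygiene basics and building a wind-down routine.",
    "Course B: advanced Python for data engineering.",
    "Course C: cognitive behavioural techniques for insomnia.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by naive keyword overlap with the query, return top k."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def answer(query: str) -> str:
    """Build a context-stuffed prompt and call Chat Completions."""
    from openai import OpenAI  # lazy import so retrieval is testable offline

    context = "\n".join(retrieve(query, DOCS))
    client = OpenAI()  # needs OPENAI_API_KEY in the environment
    resp = client.chat.completions.create(
        model="gpt-4-turbo",  # model name is an assumption; use your own
        messages=[
            {"role": "system",
             "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": query},
        ],
    )
    return resp.choices[0].message.content
```

You lose the hosted threads and tool plumbing, but every step is your own code, so nothing can sit invisibly "in progress" on someone else's server.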

To be fair to OpenAI, though, the product is marked "Beta", so it really shouldn't be considered for immediate rollout to production until it exits beta:


Its performance, cost efficiency and reliability need to improve.

What seems to have happened is everyone has piled in to use it without considering the state of the Product or its true viability.
