Assistant is simply echoing the user message back

This is a new issue that just started today, Feb 27. The assistant in question was working fine yesterday. Today, no matter what we send as role="user" content, we get back a message that simply repeats that content. I updated to the latest Python library (openai==1.64.0) and it didn't help.

Sample code:

message = openai_client.beta.threads.messages.create(
    thread_id=chat.thread,
    role="user",
    content=user_content,
)
run = openai_client.beta.threads.runs.create_and_poll(
    thread_id=chat.thread,
    assistant_id=p.assistant,
)
messages = list(openai_client.beta.threads.messages.list(thread_id=chat.thread))
content = messages[0].content[0].text

The assistant is using model gpt-4o with a file_search tool.

With any assistant we have, user_content and content end up identical after this call.

Anybody else seeing this today?

Pretty simple. Get the messages specific to a run ID.

thread_new_message = client.beta.threads.messages.list(
    thread.id,
    run_id=run.id,
)
content = thread_new_message.data[0].content[0].text.value

Otherwise you are getting the full list of messages in the thread and picking from whatever the default pagination limit happens to return.
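
For example, with the list parameters spelled out at their defaults (a sketch, using the same client and thread objects):

all_messages = client.beta.threads.messages.list(
    thread.id,
    limit=20,      # default page size
    order="desc",  # newest message first
)

With order="desc", index 0 is the most recent message in the thread, which is why reading messages[0] without a run_id filter can hand you back your own prompt.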

Thanks for that clarification. But when I add the run_id filter, I get no messages at all (the list is just []). So perhaps the problem I'm seeing is just poor error handling in the Assistants API: when something goes wrong, it doesn't signal any errors but leaves the result empty. And by not including the run_id, I was picking up my original prompt by default.

This would be consistent with other non-assistant queries, which are frequently telling me I’m out of quota, even though I’m nowhere near out of quota. But based on this forum, that seems to be a common issue right now.

You should also be checking the status of the run there. That tells you whether the AI has stopped with requires_action for a tool call, or whether the run ended in another "bad" status.
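
For example, a rough sketch building on the snippets above (openai_client, chat.thread, and run are the same objects; handle_tool_calls is a hypothetical helper, not part of the SDK):

if run.status == "completed":
    # Only read the reply once the run actually finished.
    messages = openai_client.beta.threads.messages.list(
        thread_id=chat.thread,
        run_id=run.id,
    )
    content = messages.data[0].content[0].text.value
elif run.status == "requires_action":
    # The model stopped to request a tool call; submit the tool
    # outputs before polling again (hypothetical helper).
    handle_tool_calls(run)
else:
    # failed, cancelled, expired, or incomplete
    print(f"Run ended with status: {run.status}")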

Thanks again! I see that in the documentation now. Must have been missing from the bootstrap when I first implemented this.

The status is "failed", so this all makes sense. My API calls are all being rejected with bogus quota errors, and that was just the weird failure mode because I wasn't checking the run status. I'll jump over to that other quota thread to see what's going on.
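
For anyone else who hits this: the failure details are on the run object itself, so a check as small as this would have surfaced the real problem (run is the object returned by create_and_poll above):

if run.status == "failed" and run.last_error is not None:
    # In my case this printed the quota error that was actually failing the runs.
    print(f"Run failed: {run.last_error.code} - {run.last_error.message}")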

See that you aren’t out of credits!

Okay. That’s crazy. I was out of credits. Even though I have auto-recharge enabled. What’s the point of automatic recharge if it doesn’t automatically recharge? Maybe there’s a bug in billing?

Anyway, I have credits now, and the error is gone.

You’re a life-saver. Thanks!