A few minutes ago I was testing code just fine with no problems, and suddenly every run I submit immediately goes into “incomplete” status. Is this an outage, or what else could cause this? I have plenty of credits in my account, and run.last_error is None.
It has been going on for 30-45 minutes now.
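For reference, this is roughly how I create and inspect the run (a minimal sketch with the Python SDK; thread_id and assistant_id stand in for my existing objects):

```python
from openai import OpenAI

client = OpenAI()

# thread_id and assistant_id are placeholders for existing objects
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread_id,
    assistant_id=assistant_id,
)

# The run comes back "incomplete" almost immediately,
# yet last_error gives no hint about the cause
print(run.status)              # "incomplete"
print(run.last_error)          # None
print(run.incomplete_details)  # reason for the incomplete status, if any
```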
Hello @satoshin, I am facing the same issue: the run gets stuck in incomplete status for a long time, and then some responses fail directly with LastError(code='server_error', message='Sorry, something went wrong.').
I have been stuck on this since last week. Do you have any updates on it?
I tried different values of max_prompt_tokens and max_completion_tokens, from null up to 50000, yet kept getting an incomplete run.
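For context, this is roughly what I was varying (a minimal sketch with the Python SDK; the token values shown are just examples):

```python
from openai import OpenAI

client = OpenAI()

# Varying the token caps did not help: from None (no cap) up to 50000,
# the run still ended up incomplete
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread_id,
    assistant_id=assistant_id,
    max_prompt_tokens=50000,       # also tried None and smaller values
    max_completion_tokens=50000,
)
print(run.status, run.incomplete_details)
```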
The issue resolved itself for that particular thread when I set the truncation_strategy to last_messages.
The issue would resurface on the same thread if I let truncation_strategy take its default, i.e. auto.
The incomplete reason, though, is always the same: incomplete_details: {"reason":"max_prompt_tokens"}.
Overall, the behaviour seems more stable with a truncation_strategy of last_messages (see the sketch below), though I have seen an incomplete run once even with this strategy.
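For anyone else hitting this, the workaround looks roughly like this (a sketch with the Python SDK; last_messages=10 is an arbitrary choice, tune it to your thread length):

```python
from openai import OpenAI

client = OpenAI()

# Pinning the truncation strategy to last_messages instead of the
# default "auto" made runs on the same thread complete reliably
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread_id,
    assistant_id=assistant_id,
    truncation_strategy={
        "type": "last_messages",
        "last_messages": 10,  # arbitrary example value
    },
)
print(run.status)  # "completed" in most of my attempts
```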
@jknt Agreed, I used the same strategy. When you set max token limits, the run always gets stuck in incomplete status, and with truncation_strategy set to auto it still keeps failing after a certain number of questions, while setting last_messages keeps the thread stable.
@stevecoffey Okay, on a little more digging I see that while the API returns the error, internally the model seems to have generated 11 replies to a single prompt of “hi”. Please check the last set of exchanges on the thread.
The replies can be viewed on the thread https://platform.openai.com/threads/thread_U0xzyGVHXEyOGGzCnpZAIU3Z, whereas the API continues to return an incomplete status.
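You can see the mismatch by listing the thread messages directly while the run still reports incomplete; a minimal sketch with the Python SDK (assuming text content in the messages):

```python
from openai import OpenAI

client = OpenAI()

# The run reports "incomplete", yet the thread itself holds
# multiple assistant replies to the single "hi" prompt
messages = client.beta.threads.messages.list(
    thread_id="thread_U0xzyGVHXEyOGGzCnpZAIU3Z",
    order="desc",
    limit=20,
)
for message in messages.data:
    print(message.role, message.content[0].text.value)
```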
Here is another thread that is leaking messages with the same reason, {"reason":"max_prompt_tokens"}:
thread_rt7uedLtejTlEUOXqBjWjzLb
So it seems there is a bug in the tooling around this.