When I request a run step, I was hoping I’d be able to get the partial message, like you do when using ChatGPT on the website. However, while the run step object has a status of “in_progress” and I request the message referenced by the run step’s “message_id”, that message has empty content until the run step’s status is “completed”. Is there a way of getting the partial message? Or is that partial-message effect in the ChatGPT experience just a UI element?
No, there is no way of obtaining the partial AI language output (or input) of a step while it is in progress.
That would allow you to count tokens yourself and hold OpenAI accountable.
It was possible for me, until the tool_call to code interpreter started taking things too personally… I managed to dump every step message sent, even though the step won’t send the last message. This worked, but for user messages that use the code_interpreter, the assistant message created is sometimes empty and only filled in after about 2 seconds. I’m very disappointed that, as long as you’re using the Assistants API, just to get a single answer you should be ready to make 10+ calls like I did, to capture every single thing.
Know the worst part? If you don’t introduce a sleep between your polls, you also get penalized with “too many requests” rate-limit errors.
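The polling pattern described above (sleep between polls, keep retrying until the run step reports completion) can be sketched generically. This is a minimal illustration in Python with a stubbed status function standing in for the real API call, not the actual PHP code mentioned below; `poll_until_complete` and its parameters are made up for the example. The growing sleep interval is the usual way to stay under the rate limit:

```python
import time

def poll_until_complete(fetch_status, interval=0.5, backoff=2.0, max_wait=30.0):
    """Poll fetch_status() until it returns "completed", sleeping between
    calls so the API's rate limiter isn't tripped. The sleep interval is
    multiplied by `backoff` after each poll; gives up after ~max_wait seconds."""
    waited = 0.0
    while waited < max_wait:
        status = fetch_status()
        if status == "completed":
            return status
        time.sleep(interval)
        waited += interval
        interval *= backoff
    raise TimeoutError("run step did not complete in time")

# Stub standing in for a real run-step retrieval call: the first two
# polls report "in_progress", then the step finishes.
responses = iter(["in_progress", "in_progress", "completed"])
print(poll_until_complete(lambda: next(responses), interval=0.01))
# prints "completed"
```

The same loop applies when re-fetching a message whose content arrives a couple of seconds after the object is created: treat “content still empty” like “in_progress” and poll again with a delay.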
I can share the code I came up with, but it’s in PHP using the openai-php/client lib. You may find a clue to your issue in it.
I’ll take anything you’ve got. PHP will be fine; I’ll just have GPT explain it to me.
I’m facing a similar issue. Would you care to share your code with me?