NodeJS Assistant Streaming + Tool Call Problem

Hello, I'm trying to implement tool calls with the streaming option for the Assistant, but I ran into a problem:

    gptConversation['main_run'] = openai.beta.threads.runs.createAndStream(gptConversation['main_thread'].id, {
        assistant_id: gptConversation['main_assistant'].id
    })
        .on('textCreated', (text) => console.log(text))
        .on('textDelta', (textDelta, snapshot) => {
            saveConversation(gptConversation, textDelta.value)
            console.log(textDelta)
            console.log(snapshot)

            let gptC = []
            gptC.push(gptConversation)

            let body = {
                jsonData: {
                    type: "update",
                    query: {
                        gptConversation: gptC
                    }
                }
            }
            let token = authService.getBearer()

            BackendService.post(url, body, false, token).then((res) => {
            }, (error) => {
                console.log("Backend Post failed", error)
            })

        })
        .on('toolCallCreated', (toolCall) => {
            toolCallId = toolCall.id;
            functionName = toolCall.function.name;
        })
        .on('toolCallDelta', (toolCallDelta, snapshot) => {
            toolCallArgs += toolCallDelta.function.arguments;
            const completeArgs = checkToolCallArgsComplete(toolCallArgs);
            if (completeArgs !== "") {
                console.log("toolCallId", toolCallId)
                console.log("functionName", functionName)
                console.log("completeArgs", completeArgs)
                let functionResponse = JSON.stringify( aiFunctions[functionName](gptConversation,completeArgs));
                saveConversation(gptConversation, toolCallDelta)

                let gptC = []
                gptC.push(gptConversation)

                let body = {
                    jsonData: {
                        type: "update",
                        query: {
                            gptConversation: gptC
                        }
                    }
                }
                let token = authService.getBearer()

                BackendService.post(url, body, false, token).then((res) => {
                }, (error) => {
                    console.log("Backend Post failed", error)
                })
                try {
                    const run = openai.beta.threads.runs.submitToolOutputs(
                        gptConversation['main_thread'].id,
                        gptConversation['main_run'].id,
                        {
                            tool_outputs: [
                                {
                                    tool_call_id: toolCallId,
                                    output: functionResponse,
                                },
                            ],
                        }
                    );
                } catch (e) {
                    console.log("Run error", e)
                }

            }
        })

The error I get is:

    message: "Can't add messages to THREAD_ID while a run RUN_ID is active."

I have no idea how to submit the tool output, and the next problem is that I don't get any text answer like "That sounds nice, etc."

If a tool call is happening, no other output is provided. I've been trying to figure this out for over 4 days now, and it's slowly getting expensive for me xD

When the tool returns data, it means the model has made a decision; the next step is to take the JSON output (from your tool call) and perform the action.
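
For the Node code in the question, that step amounts to: once the accumulated arguments form complete JSON, parse them and hand them to your own function, then turn the result back into a string. A minimal sketch, reusing the aiFunctions map and gptConversation object from the question (the JSON.parse is an assumption about how your functions expect their arguments):

    // Hypothetical helper: run the local function the model requested and
    // return its result as the string the API expects as a tool output.
    function runLocalTool(functionName, argsJson, gptConversation) {
        const args = JSON.parse(argsJson);                                // arguments accumulated from the deltas
        const result = aiFunctions[functionName](gptConversation, args);  // your own code performs the action
        return JSON.stringify(result);                                    // tool outputs must be submitted as strings
    }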

I get a tool output, but I don't know how to send the output back.
I get the error message "Can't add messages to THREAD_ID while a run RUN_ID is active.", and when I then try to send a new message I get the "end" event, which tells me the run is finished.

And without streaming I get text back like a chat message with OpenAI, which is not the response of the assistant.

So I have a run that never finishes.

In case you didn't figure this out yet, or if someone else finds this thread, here's how I did it. First you'll need the run and the thread id of the tool call. You can use the current_run helper from the Assistant helper methods (in Python it would look like current_run = self.current_run, then current_run.thread_id). Then you use client.beta.threads.runs.submit_tool_outputs to submit the results of your processing to the run, and it will continue. Here's my function that takes in an array of tool outputs to submit, in case there are multiple function calls:

    def submit_all_tool_output(thread_id, run_id, tool_outputs):
        final_tool_outputs = []
        for tool_call_id, output in tool_outputs:
            single_output = {"tool_call_id": f"{tool_call_id}", "output": f"{output}"}
            print(f"Single Output: {single_output}")
            final_tool_outputs.append(single_output)
        # print(f"{timestamp()} - Complete tool_outputs: {final_tool_outputs}")
        # Submit all the tool outputs at once
        run = client.beta.threads.runs.submit_tool_outputs(
            thread_id=thread_id,
            run_id=run_id,
            tool_outputs=final_tool_outputs
        )

You need to wait until the event status becomes requires_action and the required action type is submit_tool_outputs, then call submitToolOutputsStream, which doesn't seem to be documented on the Reference page for streaming.

    const stream = await openai.beta.threads.runs.submitToolOutputsStream(
        thread_id,
        run_id,
        {
            tool_outputs
        }
    )
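
Putting the pieces together, here's a rough sketch of how the handlers from the question could be restructured around that flow. It assumes the same createAndStream / submitToolOutputsStream helpers and the aiFunctions map used above, plus threadId / assistantId variables of your own; treat the exact event names and method signatures as assumptions to check against your SDK version:

    // Sketch only: instead of submitting inside toolCallDelta, collect the
    // complete tool calls from the requires_action event and continue the run.
    function attachHandlers(stream, threadId) {
        stream
            .on('textDelta', (textDelta) => process.stdout.write(textDelta.value)) // normal assistant text
            .on('event', (event) => {
                // the run has paused and is waiting for your tool outputs
                if (event.event === 'thread.run.requires_action') {
                    const run = event.data;
                    const toolCalls = run.required_action.submit_tool_outputs.tool_calls;

                    // run every requested function locally (aiFunctions is the map from the question)
                    const tool_outputs = toolCalls.map((call) => ({
                        tool_call_id: call.id,
                        output: JSON.stringify(
                            aiFunctions[call.function.name](JSON.parse(call.function.arguments))
                        ),
                    }));

                    // continue the same run; this returns a new stream carrying the text answer
                    const continuation = openai.beta.threads.runs.submitToolOutputsStream(
                        threadId,
                        run.id,
                        { tool_outputs }
                    );
                    attachHandlers(continuation, threadId); // listen again for text or further tool calls
                }
            });
    }

    const stream = openai.beta.threads.runs.createAndStream(threadId, { assistant_id: assistantId });
    attachHandlers(stream, threadId);

The idea is that nothing is submitted from inside toolCallDelta; the arguments are only read once the run reports requires_action, and the continuation stream then carries the assistant's text answer.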