Encountering an error while streaming content from the OpenAI API

Using an Expo TSX file, trying to fetch data directly from the client side.

The problem occurs when I try to get the data back.

Error:

Possible unhandled promise rejection

TypeError: Object is not async iterable

Streaming request

const response = await openai.chat.completions.create({
  model: "gpt-3.5-turbo-0125",
  stream: true,
  messages: updatedChat,
});

let fullResponse = "";

// Accumulate the streamed deltas (process.stdout is not available
// in React Native, so collect the text instead).
for await (const chunk of response) {
  fullResponse += chunk.choices[0]?.delta?.content || "";
}

Async function

const handleUserInput = async () => {
  const updatedChat = [
    ...chat,
    {
      role: "user",
      content: userInput,
    },
  ];
  setLoading(true);

  const response = await openai.chat.completions.create({
    model: "gpt-3.5-turbo-0125",
    stream: true,
    messages: updatedChat,
  });

  let fullResponse = "";

  // Accumulate the streamed deltas (process.stdout is not available
  // in React Native, so collect the text instead).
  for await (const chunk of response) {
    fullResponse += chunk.choices[0]?.delta?.content || "";
  }

  // Previous non-streaming version, kept for reference:
  //   const modelResponse = response.choices[0].message.content;
  //   try {
  //     if (modelResponse) {
  //       const updatedChatWithModel = [
  //         ...updatedChat,
  //         { role: "assistant", content: modelResponse },
  //       ];
  //       setChat(updatedChatWithModel);
  //       setUserInput("");
  //     }
  //   } catch (error) {
  //     console.log(error);
  //   } finally {
  //     setLoading(false);
  //   }
};

I'm trying to get the streamed data back in a TSX file directly from client-side code, and encountered this error:

Possible unhandled promise rejection

TypeError: Object is not async iterable

Any news on this topic? I'm encountering the same issue with the Assistants API, also using Expo.

I added a Node.js-based backend and stopped trying to get results directly from the client-side code.
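
For reference, a minimal version of that setup could look like the sketch below (the Express app, the /chat route name, and the response shape are assumptions for illustration, not my exact code):

import express from "express";
import OpenAI from "openai";

const app = express();
app.use(express.json());

// The API key stays on the server; the client never sees it.
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

app.post("/chat", async (req, res) => {
  try {
    // Non-streaming call: the whole completion is awaited server-side,
    // so the client never touches the OpenAI SDK directly.
    const completion = await openai.chat.completions.create({
      model: "gpt-3.5-turbo-0125",
      messages: req.body.messages,
    });
    res.json({ reply: completion.choices[0].message.content });
  } catch (error) {
    console.error(error);
    res.status(500).json({ error: "Completion failed" });
  }
});

app.listen(3000);

The Expo client then makes an ordinary fetch against this endpoint instead of calling the OpenAI SDK in the app, which sidesteps the async-iteration problem entirely.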

You can accomplish this on both the client and server by using the Streams API. Optionally, use the back end to apply a transform to the stream (mainly to filter out data the client doesn't need), then pass the Reader object down to consume and render the stream on the client side.
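
Roughly like this, as a minimal sketch (the /api/chat route name, the fetch-style handler shape, and the streaming-capable client fetch are assumptions, not tested code). The back-end half turns the SDK stream into a plain-text ReadableStream, filtering each chunk down to just the delta text:

import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function POST(req: Request): Promise<Response> {
  const { messages } = await req.json();

  // On Node, stream: true returns an async-iterable stream of chunks.
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo-0125",
    stream: true,
    messages,
  });

  // Transform step: keep only the delta text and drop everything else,
  // so the client never has to parse the raw chunk objects.
  const encoder = new TextEncoder();
  const body = new ReadableStream<Uint8Array>({
    async start(controller) {
      for await (const chunk of completion) {
        const text = chunk.choices[0]?.delta?.content || "";
        if (text) controller.enqueue(encoder.encode(text));
      }
      controller.close();
    },
  });

  return new Response(body, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}

The client half then consumes that filtered stream with a Reader. Note that plain React Native fetch does not expose response.body as a ReadableStream; in Expo this needs a streaming-capable fetch implementation (recent SDKs ship one as expo/fetch):

async function streamChat(messages: { role: string; content: string }[]) {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages }),
  });

  // Read the body incrementally instead of awaiting the full response.
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let fullResponse = "";

  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    fullResponse += decoder.decode(value, { stream: true });
    // Update UI state with the partial text here.
  }
  return fullResponse;
}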


The same thing happens here… it seems like a bug in the OpenAI API.

export const getOpenaiStreamResponse = async (userMessage) => {
    // Add the user's message to the history
    chatHistory.push({ role: "user", content: userMessage });

    try {
        // Create an empty message element for the assistant
        const chatBoxMesages = document.getElementById("chat-messages");
        const messageElement = document.createElement("div");
        messageElement.classList.add("chat-messages", "assistant");
        messageElement.style.textAlign = "left";
        messageElement.innerHTML = `<div class="message-text-user">${chatbotSettings.chatbotName}</div>
                                    <div class="message-icon-assistant"><img src=${chatbotSettings.chatbotMessageAssistantIcon} alt="user-assistant"></div>
                                    <div class="message-text-assistant"></div>`;
        chatBoxMesages.appendChild(messageElement);
        chatBoxMesages.scrollTop = chatBoxMesages.scrollHeight;

        const assistantMessageElement = messageElement.querySelector(".message-text-assistant");

        // Call chat.completions.create in streaming mode.
        // create() returns a Promise, so it must be awaited before
        // the resulting stream can be iterated.
        const stream = await openai.chat.completions.create({
            model: "gpt-4o-mini",  // OpenAI model
            messages: chatHistory, // Send the limited history
            temperature: 0.7,      // Creativity control
            max_tokens: 100,       // Maximum tokens in the response
            stream: true,          // Enable streaming
        });

        // Listen for the data arriving in the stream
        for await (const chunk of stream) {
            const assistantMessageChunk = chunk.choices[0].delta.content;

            // If there is content, display it (process.stdout does not
            // exist in the browser, so render to the DOM instead)
            if (assistantMessageChunk) {
                assistantMessageElement.textContent += assistantMessageChunk;
                chatBoxMesages.scrollTop = chatBoxMesages.scrollHeight;
                console.log("Respuesta parcial " + assistantMessageChunk);
            }
        }

        // The message is complete
        console.log("\nAsistente ha terminado de responder.");
    } catch (error) {
        displayMessage("Lo siento, no puedo responder en este momento. Intenta más tarde.", "assistant");
        console.error("Error en la API de OpenAI:", error);
    }
}

Error in the browser console:

Error en la API de OpenAI: TypeError: stream is not iterable
    getOpenaiStreamResponse openaiCode.js:80
    getAssistantResponse chatbotscript.js:75
    sendMessage evenshandler.js:33
    <anonymous> evenshandler.js:22
    EventListener.handleEvent* evenshandler.js:19
openaiCode.js:96:16
    getOpenaiStreamResponse openaiCode.js:96
    getAssistantResponse chatbotscript.js:75
    sendMessage evenshandler.js:33
    <anonymous> evenshandler.js:22
    (Asíncrono: EventListener.handleEvent)
    <anonymous> evenshandler.js:19