Handling GPT Stream Safely

Hi all,

I’m trying to set up a GPT stream for my application and am looking for feedback or suggestions on my code. Currently I have:

```ts
import OpenAI from "openai";

// assumes OPENAI_API_KEY is set in the environment
const openai = new OpenAI();

export async function* handleGptStream(
  params: OpenAI.Chat.ChatCompletionCreateParams
) {
  try {
    const completionStream = await openai.chat.completions.create({
      ...params,
      stream: true,
    });

    for await (const chunk of completionStream) {
      // handle end of stream -- any finish reason ("stop", "length",
      // "content_filter", ...), since the final chunk carries no content delta
      const finishReason = chunk.choices?.[0]?.finish_reason;
      if (finishReason) {
        console.log(`gpt stream finished: ${finishReason}`);
        return;
      }
      // handle message deltas
      const content = chunk.choices?.[0]?.delta?.content;
      if (typeof content === "string") {
        console.log(`yielding chunk: ${content}`);
        yield content;
      } else {
        console.error("malformed gpt response");
        throw new Error("malformed gpt response");
      }
    }
  } catch (e) {
    throw new ThirdPartyError("could not fetch from openAI API");
  }
}
```

My idea is that I need to check for the finish reason first. If I checked for a missing content delta first, I would always throw an error on the final chunk, since that chunk carries a finish reason but no content.
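To illustrate the ordering argument, here are simplified shapes of two typical streamed chunks (illustrative only, not the full SDK types):

```typescript
// A mid-stream chunk carries a content delta and no finish reason.
const contentChunk = {
  choices: [{ delta: { content: "Hello" }, finish_reason: null }],
};

// The final chunk carries a finish reason and an empty delta.
const finalChunk = {
  choices: [{ delta: {}, finish_reason: "stop" }],
};

// Checking finish_reason first lets the final chunk exit cleanly;
// checking delta.content first would see `undefined` here and throw.
const finished = finalChunk.choices[0].finish_reason === "stop";
```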

Is this in line with how others have handled streaming for their applications?
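For context, here is a minimal sketch of how a caller might consume the generator (the `collect` helper is hypothetical, not part of the post above):

```typescript
// Hypothetical consumer: accumulate streamed deltas into the full reply.
// Works with any async generator of strings, e.g. handleGptStream(params).
async function collect(stream: AsyncGenerator<string>): Promise<string> {
  let full = "";
  for await (const piece of stream) {
    full += piece; // append each delta as it arrives
  }
  return full;
}
```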
