Parse streamed JSON from OpenAI API in Next.js

I have the following code:

import { NextFetchEvent } from "next/server";
import { OpenAIStream, StreamingTextResponse } from "ai";
import { z } from "zod";

import { env } from "@/env.mjs";
import { TIER_AICOPY_FEATURE_ID, TIER_EXTRACOPY_FEATURE_ID } from "@/config/tierConstants";
import { openAI } from "@/lib/ai";
import { tier } from "@/lib/tier";


if (!env.OPENAI_API_KEY) {
  throw new Error("Missing env var from OpenAI");
}

export const runtime = "edge";

const inputSchema = z.object({
  prompt: z.string(),
  userId: z.string(),
});

const generateCopyStream = async (input: string) => {
  const prompt = `You are now embodying the persona of an encouraging pastor \"${input}\".\n\nThis is the answer you came up with:\n\n`;

  const response = await openAI.createCompletion({
    model: "text-davinci-003",
    prompt,
    temperature: 0.85,
    max_tokens: 1000,
    top_p: 1,
    frequency_penalty: 0,
    presence_penalty: 0,
    stream: true,
    n: 1,
  });

  const stream = await OpenAIStream(response);

  return stream;
};

export async function POST(req: Request) {
  try {
    const json = await req.json();
    const body = inputSchema.parse(json);

    const tierAnswer = await tier.can(`org:${body.userId}`, TIER_AICOPY_FEATURE_ID);

    if (tierAnswer.ok) {
      const stream = await generateCopyStream(body.prompt);

      await tierAnswer.report();

      return new StreamingTextResponse(stream);
    } else {
      const tierExtraCopyAnswer = await tier.can(`org:${body.userId}`, TIER_EXTRACOPY_FEATURE_ID);

      if (tierExtraCopyAnswer.ok) {
        const stream = await generateCopyStream(body.prompt);

        await tierExtraCopyAnswer.report();

        return new StreamingTextResponse(stream);
      } else {
        return new Response("You expired your credits and need to upgrade!", {
          status: 402,
          statusText: "You expired your credits and need to upgrade!",
        });
      }
    }
  } catch (error) {
    if (error instanceof z.ZodError) {
      return new Response(JSON.stringify(error.issues), { status: 422 });
    }

    return new Response(null, { status: 500 });
  }
}

How can I parse the JSON-formatted response from the OpenAI API stream into a readable, clean UI experience for the user?

Welcome to the OpenAI community, @Meistro!

To parse stringified JSON that is being streamed chunk by chunk, you'd have to wait for the stream to complete, and then parse the accumulated string with JSON.parse().

You'll also want to make sure the stream stopped because the completion actually finished, and not because it was truncated partway through.
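
As a minimal sketch of that on the client (the route path /api/generate and the helper name are just placeholders for however your project is laid out):

// Minimal sketch: accumulate the streamed body, then parse it once the stream ends.
// Assumes the POST route above is mounted at /api/generate (adjust to your path).
async function fetchAndParseCopy(prompt: string, userId: string) {
  const res = await fetch("/api/generate", {
    method: "POST",
    body: JSON.stringify({ prompt, userId }),
  });

  if (!res.ok || !res.body) {
    throw new Error(`Request failed with status ${res.status}`);
  }

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let raw = "";

  // Read until the stream is done; only then is the stringified JSON complete.
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    raw += decoder.decode(value, { stream: true });
  }

  return JSON.parse(raw);
}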


Following on from what @sps has said, and depending on your level of coding familiarity, you could handle the server-sent events as they arrive with a custom function that processes the JSON in real time into something more human-friendly: for example, treating { as a new line plus an indent and } as a new line plus an outdent. You could then do some further programmatic stripping of quotes and colons and get well into the weeds with layouts.

This would be an interesting, if rabbit-hole, kind of project.
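
A very rough sketch of that idea, purely as an illustration (the function name is made up, and this is a display hack rather than a real JSON parser):

// Sketch only: reformat JSON text as it streams in, without waiting for the
// full document. Braces and brackets become line breaks with indentation,
// quotes are dropped, and colons become plain spaces.
function createIncrementalFormatter() {
  let depth = 0;

  return (chunk: string): string => {
    let out = "";
    for (const ch of chunk) {
      if (ch === "{" || ch === "[") {
        depth++;
        out += "\n" + "  ".repeat(depth);
      } else if (ch === "}" || ch === "]") {
        depth = Math.max(0, depth - 1);
        out += "\n" + "  ".repeat(depth);
      } else if (ch === ",") {
        out += "\n" + "  ".repeat(depth);
      } else if (ch === '"') {
        // strip quotes for a cleaner, prose-like layout
      } else if (ch === ":") {
        out += " "; // drop the colon but keep a separator space
      } else {
        out += ch;
      }
    }
    return out;
  };
}

// Usage: create one formatter per response, then append
// formatter(chunk) to the UI as each chunk arrives.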


From the npm page of Vercel's ai package, you'll need the useChat hook on the client side. I haven't tried it myself, but just from looking at the sample code:

'use client'

import { useChat } from 'ai/react'

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat()

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.role}: {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
      </form>
    </div>
  )
}

m.content will contain the streamed response, and the hook itself will update it dynamically (with id used as the React key).

This would only render the raw JSON response, though, which isn't a good user experience. Would you happen to know how to show it to the user in a better form?