High rate of invalid JSON responses when streaming

I’m using NodeJS to call createChatCompletion and stream the response, like so:

    async function openAiCall(maxResponseTokens, messages, modelName) {
      const response = await openai.createChatCompletion(
        {
          temperature: 0.7,
          max_tokens: maxResponseTokens,
          top_p: 1,
          frequency_penalty: 0,
          presence_penalty: 0,
          user: uid,
          model: modelName,
          stream: true,
          messages,
        },
        { responseType: "stream" }
      );

      return response;
    }

I then listen to the response stream like so:

    ...
    response.data.on("data", async (data) => {
      const lines = data.toString().split("\n");

      for (const line of lines) {
        const message = line.replace(/^data: /, "");

        if (message === "[DONE]") return;

        if (!message) continue;

        let text = "";

        try {
          const parsed = JSON.parse(message);
          if (parsed) text = parsed.choices[0].delta?.content;
        } catch (error) {
          // Message can't be parsed
        }
    ...

While the function and the response work correctly, the number of lines that fail to parse as valid JSON has increased a lot. The volume is especially high with these models: gpt-3.5-turbo-0613, gpt-3.5-turbo-16k-0613, and gpt-3.5-turbo.

This used to happen rarely (~once per 1k lines) but now happens ~50–100 times per 1k lines.

Anyone else experiencing this? Is there anything that can be done to mitigate this issue?

2 Likes

You should have 0 invalid responses. Something in your streaming code is off.

Some lines, when looping through for (const line of lines) {, are incomplete (missing a chunk at the beginning or end). Here's an example: ,"created":1687829221,"model":"gpt-3.5-turbo-16k-0613","choices":[{"index":0,"delta":{"content":" of"},"finish_reason":null}]}

Suggested code from OpenAI here uses:

const lines = data.toString().split('\n').filter(line => line.trim() !== '');

There’s also a wide variety of other npm packages and code snippets in that GitHub issue.
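For context, this is roughly how that filter slots into the handler from the original post. It is a sketch only: it removes blank SSE separator lines, but by itself it does not fix objects that are split across two data events.

    response.data.on("data", (data) => {
      // Drop empty lines before parsing so blank SSE separators don't throw
      const lines = data
        .toString()
        .split("\n")
        .filter((line) => line.trim() !== "");

      for (const line of lines) {
        const message = line.replace(/^data: /, "");
        if (message === "[DONE]") return;

        try {
          const parsed = JSON.parse(message);
          const text = parsed.choices[0].delta?.content ?? "";
          // ... handle text ...
        } catch (error) {
          // Still possible if a single JSON object spans two data events
        }
      }
    });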

1 Like

I’m using the API directly and facing this issue as well.

As you can see from the image below, the JSON from the first chunk is not complete yet and continues in the second data chunk, causing an invalid JSON format issue.

@georg-san Did you manage to find a workaround?

I think even the OpenAI Playground is facing this issue.

Use the v4 beta of the Node SDK for streaming.
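For anyone landing here later, this is roughly what streaming looks like with the v4 SDK: it exposes the stream as an async iterator of already-parsed chunks, so you never split or JSON.parse SSE lines yourself. A minimal sketch; the model and parameters are just placeholders:

    import OpenAI from "openai";

    const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

    async function streamCompletion(messages) {
      // With stream: true the SDK returns an async iterable of parsed chunks
      const stream = await openai.chat.completions.create({
        model: "gpt-3.5-turbo",
        messages,
        stream: true,
      });

      let text = "";
      for await (const chunk of stream) {
        text += chunk.choices[0]?.delta?.content ?? "";
      }
      return text;
    }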

1 Like

@anon10827405 Thanks for sharing, gonna try now!

I’m not using the OpenAI package, but I had the same problem.
Sometimes the chunks are incomplete and continue in the next loop, causing parse errors.

I solved this in my case by detecting when a chunk does not end with a completed object, i.e. ‘}]}’.
I then store that line in a variable and remove it from the lines array.
On the next loop I prepend it to the first index of lines.
Ex. lines[0] = inCompleteChunk + lines[0]
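
Roughly what that looks like in code, as a sketch based on the description above (the check relies on the chunk JSON ending in }]}, which matches the example shown earlier in the thread):

    let inCompleteChunk = "";

    response.data.on("data", (data) => {
      const lines = data
        .toString()
        .split("\n")
        .filter((line) => line.trim() !== "");

      // Prepend the fragment carried over from the previous data event
      if (inCompleteChunk && lines.length > 0) {
        lines[0] = inCompleteChunk + lines[0];
        inCompleteChunk = "";
      }

      // If the last line was cut off mid-object, hold it back for the next event
      const last = lines[lines.length - 1] ?? "";
      if (last && last !== "data: [DONE]" && !last.endsWith("}]}")) {
        inCompleteChunk = lines.pop();
      }

      for (const line of lines) {
        const message = line.replace(/^data: /, "");
        if (message === "[DONE]") return;

        const parsed = JSON.parse(message);
        const text = parsed.choices[0].delta?.content ?? "";
        // ... handle text ...
      }
    });

Catching the JSON.parse error and treating that as the signal to carry the line over works just as well, and avoids depending on the exact shape of the chunk.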

1 Like