Issue with Chunk Streaming in ASP.NET Core using GPT-4 API

Hello OpenAI Community,

I’ve been using the GPT-4 chat completion streaming API (model gpt-4-0125-preview) in an ASP.NET Core application for a few months now. My setup streams responses from the OpenAI service through my server to my client-side code. Everything was functioning smoothly until recently, when I encountered a peculiar issue without having made any new deployments to the server.

The problem is with the streamed object chunks, which arrive corrupted at my client. An example of a returned chunk looks like this:

next chunk
Notably, the end of one object and the start of the next object are split across chunk boundaries.
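For context, I suspect a single network read is not guaranteed to align with event boundaries, so the client would need to buffer leftover bytes between reads. Here is a minimal sketch of such a stateful parser (the `parseChunk` name and the newline-delimited `data:` framing are my assumptions, not code from my app):

```typescript
// Hypothetical stateful parser: leftover text is kept in `buffer`
// between reads, so events split across chunks are reassembled.
let buffer = "";
const decoder = new TextDecoder();

function parseChunk(chunk: Uint8Array): string[] {
    // stream: true makes the decoder hold on to partial multi-byte characters
    buffer += decoder.decode(chunk, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep the incomplete trailing line for later
    return lines
        .filter((line) => line.startsWith("data:"))
        .map((line) => line.slice(5).trim());
}
```

Calling `parseChunk` on two reads that split an event mid-object yields the first complete payload immediately and the second once its remainder arrives.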

Here is the relevant client-side code:

```typescript
// Wrap a WHATWG ReadableStream as an async iterable of Uint8Array chunks.
async function* streamAsyncIterable(stream: ReadableStream<Uint8Array>) {
    const reader = stream.getReader();
    try {
        while (true) {
            const { done, value } = await reader.read();
            if (done) return;
            yield value;
        }
    } finally {
        reader.releaseLock();
    }
}

if (!res.body.getReader) {
    // Node.js-style stream: consume it via the "readable" event instead.
    const body: any = res.body;
    if (!body.on || !body.read) {
        throw new Error("unsupported stream implementation");
    }
    body.on("readable", () => {
        let chunk;
        while (null !== (chunk = body.read())) {
            // process chunk ...
        }
    });
} else {
    // Browser-style ReadableStream.
    for await (const chunk of streamAsyncIterable(res.body)) {
        const str = new TextDecoder().decode(chunk);
        const data = processStringData(str);
        data.forEach((item: any) => {
            try {
                // process item ...
            } catch (error) {
                // ignore malformed items
            }
        });
    }
}
```
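One thing I noticed while debugging: creating a new `TextDecoder` per chunk, as in the loop above, can also garble multi-byte UTF-8 characters that happen to straddle a chunk boundary. Reusing a single decoder in streaming mode avoids that. A small self-contained demonstration (the sample string and chunk split are made up):

```typescript
const enc = new TextEncoder();
const bytes = enc.encode("héllo"); // "é" is two bytes in UTF-8
// Simulate a network read that splits the character across two chunks.
const chunks = [bytes.slice(0, 2), bytes.slice(2)];

const decoder = new TextDecoder();
let text = "";
for (const chunk of chunks) {
    // stream: true buffers the partial character instead of emitting U+FFFD
    text += decoder.decode(chunk, { stream: true });
}
text += decoder.decode(); // flush any trailing bytes
```

With a fresh `new TextDecoder().decode(chunk)` per chunk, the split "é" would come out as replacement characters; with one shared decoder, `text` is the original string.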

Interestingly, when I run the application locally, everything works fine. The problems only start once it is deployed to my IIS server; I have tested this on several servers, and the issue persists.

Any guidance or suggestions from the community would be greatly appreciated. Thank you!