The streaming response is not displayed progressively in production, though it works in development

I have an Excel add-in. Users send requests to the backend, which calls the OpenAI API's completions.create with stream: true.
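For context, the backend streams newline-delimited JSON to the client: each line is one object of the shape the frontend handler below expects. A minimal sketch of the framing only (StreamMessage and frame are illustrative names, not my actual backend code):

```typescript
// Sketch of the wire format, assuming one JSON object per line
// (newline-delimited JSON); the real backend handler is not shown here.
type StreamMessage =
	| { type: "stream"; value: string }
	| { type: "endStreaming"; value: { result: { model: string; usage: unknown } } };

// Frame a message as a single line so the client can split on "\n".
function frame(message: StreamMessage): string {
	return JSON.stringify(message) + "\n";
}
```

With this framing, each chunk the client decodes contains one or more complete lines, which the frontend splits and parses individually.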

The frontend code that sends the request and receives the response is shown below. In local tests on my machine, streaming works well: the response is displayed on screen gradually. In production, however, nothing is shown for several seconds and then the entire response appears all at once, even though console.log("message.value", message.value); prints the chunks word by word, progressively, in the console.

	setAssistantContent = (message) => {
		if (message.type === "stream") {
			this.setState(prevState => {
				let newMessages = [...prevState.messages];
				if (newMessages.length > 0 && newMessages[newMessages.length - 1].role === "assistant") {
					console.log("message.value", message.value);
					newMessages[newMessages.length - 1].content += message.value;
				} else {
					// Include the first chunk's text, otherwise it is lost.
					newMessages.push({ role: "assistant", content: message.value, mode: "streaming", model: null, usage: null });
				}
				return { messages: newMessages };
			});
		} else if (message.type === "endStreaming") {
			this.setState(prevState => {
				let newMessages = [...prevState.messages];
				if (newMessages.length > 0 && newMessages[newMessages.length - 1].role === "assistant") {
					console.log("message.value.result.usage", message.value.result.usage);
					newMessages[newMessages.length - 1].model = message.value.result.model;
					newMessages[newMessages.length - 1].usage = message.value.result.usage;
					newMessages[newMessages.length - 1].mode = "endStreaming";
				}
				return { messages: newMessages };
			});
		}
	}

	async sendMessages() {
		const token = await getSignToken();
		const res = await fetch(`${process.env.REACT_APP_URL}/http/complete`, {
			method: 'POST',
			headers: { 'Content-Type': 'application/json', 'Authorization': 'Bearer ' + token },
			body: JSON.stringify({
				messages: this.state.messages.map(m => ({ role: m.role, content: m.content })),
				model: "gpt-4-1106-preview",
				settings: this.props.settings
			})
		});

		if (res.body) {
			const reader = res.body.getReader();
			const decoder = new TextDecoder("utf-8");
			const processText = async ({ done, value }: { done: boolean, value?: Uint8Array }) => {
				if (done) {
					console.log("DONE!");
					console.log("messages", this.state.messages);
					this.props.dispatch({ type: 'subscription/updateSubscriptions' });
					return;
				}

				// A single decoder with { stream: true } keeps multi-byte
				// characters that span chunk boundaries from being garbled.
				const v = decoder.decode(value, { stream: true });
				const messages = v.split('\n').filter(m => m.trim());
				for (const messageJson of messages) {
					this.setAssistantContent(JSON.parse(messageJson));
				}

				return reader.read().then(processText);
			};

			return reader.read().then(processText);
		}
	}
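Separately, while debugging I realized that a single JSON line could in principle be split across two read() chunks, which would make JSON.parse throw. A buffered parser like the following sketch would avoid that (NdjsonBuffer is an illustrative name; this is not the code currently deployed):

```typescript
// Accumulates decoded text and only parses complete lines; the trailing
// partial line is kept in the buffer until the next chunk arrives.
class NdjsonBuffer {
	private buffer = "";

	push(chunk: string): unknown[] {
		this.buffer += chunk;
		const lines = this.buffer.split("\n");
		this.buffer = lines.pop() ?? ""; // keep the incomplete tail
		return lines.filter(l => l.trim()).map(l => JSON.parse(l));
	}
}
```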

	async onClickChat() {
		this.setState({
			messages: [...this.state.messages, { role: "user", content: this.state.promptShort }],
			promptShort: "",
			waiting: true
		}, async () => {
			await this.sendMessages();
			this.setState({ waiting: false });
		});
	}

Additionally, for the backend I use openai-node 4.14.2 behind Cloudflare and nginx; the frontend is served by nginx in Docker on Alibaba Cloud.
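Since nginx sits in front of the backend in production, I suspect response buffering: nginx may hold the upstream response until it is complete before forwarding anything to the client. This is what I am considering trying for the streaming route (a sketch; the location path and upstream name are placeholders for my real config):

```nginx
# Placeholders: /http/ and http://backend_upstream stand in for the real values.
location /http/ {
	proxy_pass http://backend_upstream;
	proxy_http_version 1.1;
	proxy_set_header Connection "";
	proxy_buffering off;   # forward upstream bytes as they arrive
	proxy_cache off;
	# Alternatively, the backend can send an "X-Accel-Buffering: no"
	# response header to disable buffering for that response only.
}
```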

Does anyone know how to solve this?