How to parallel stream completions from Chat API

I need a chatbot that uses streaming responses to answer requests on a webapp. There are limits on concurrent requests for a single openai.create generator, and I am wondering:

Is it possible to build a webapp that serves parallel streaming request-responses with GPT-4, without an Enterprise plan or Azure?

Hi and welcome to the Developer Forum!

You would have to take a look at the streaming handler in the current API client code on GitHub and relay those events out to your clients as they arrive.
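As a minimal sketch of that relay pattern: each connected client gets its own independent stream, and an async event loop interleaves them, so no special enterprise setup is needed for parallelism. The `fake_stream` generator below is a hypothetical stand-in for a real streaming call (e.g. `client.chat.completions.create(..., stream=True)` on an `openai.AsyncOpenAI` client, which yields chunks as an async iterator):

```python
import asyncio

async def fake_stream(prompt: str):
    # Hypothetical stand-in for an OpenAI streaming call; a real
    # implementation would iterate over the chunks returned by
    # client.chat.completions.create(model=..., messages=..., stream=True).
    for token in ["Answer", " to ", prompt]:
        await asyncio.sleep(0.01)  # simulate network latency between chunks
        yield token

async def relay(prompt: str, out: list[str]):
    # Forward each streamed chunk to the client as it arrives; in a
    # webapp this would write to a websocket or an SSE response instead
    # of appending to a list.
    async for chunk in fake_stream(prompt):
        out.append(chunk)

async def main():
    # Run one independent stream per client concurrently.
    outputs = {i: [] for i in range(3)}
    await asyncio.gather(*(relay(f"q{i}", outputs[i]) for i in outputs))
    return {i: "".join(chunks) for i, chunks in outputs.items()}

results = asyncio.run(main())
print(results)
```

The same fan-out works with any async web framework (e.g. FastAPI with server-sent events); the key point is that each request owns its own stream, so concurrency is limited only by your account's rate limits, not by the API itself.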