Hello, I just want to confirm something: when users report getting an “Error in input stream” error in ChatGPT responses, is that caused by my actions/prompts, or is it on ChatGPT’s side?
Also, I’ve had a lot of people reporting that GPT-4 has recently been writing code outside the code block.
The code regeneration issue and the “error in input stream” seem to be cut from the same cloth.
I believe that when a code generation is cut off early (typically by the token limit), OpenAI will automatically continue the generation, possibly by checking whether the code fences are closed or not.
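Purely a guess at the heuristic, but detecting a truncated code block is cheap: an unmatched triple-backtick fence suggests the model was cut off mid-block. A minimal sketch (my own code, not anything from OpenAI):

```python
def has_unclosed_code_fence(text: str) -> bool:
    # An odd number of triple-backtick markers means a fence was
    # opened but never closed -- a sign the reply was cut mid-block.
    return text.count("```") % 2 == 1

# A generation cut off by the token limit:
truncated = "Here is the function:\n```python\ndef add(a, b):\n    return a +"
print(has_unclosed_code_fence(truncated))  # True

# A complete generation:
complete = "Here is the function:\n```python\ndef add(a, b):\n    return a + b\n```"
print(has_unclosed_code_fence(complete))  # False
```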
I was able to somewhat confirm this by looking at the generated HTML (I refuse to spend more than 3 minutes debugging any OpenAI issue).
This is what happens when the code generation gets cut. I bet if I reproduced this error I would see something in the network logs. I imagine the HTML would look similar to when ChatGPT appends new information from tools.
Two chat messages are created. The first message gets cut off, then something is triggered (because of the unfinished code) and a “continue” is sent to resume generation.
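The flow I’m imagining could look roughly like this. Everything here is hypothetical: `generate` is a stand-in for the real model call, and the character budget is just a proxy for the token limit.

```python
# Hypothetical sketch of the auto-continue loop; names are made up,
# this is NOT OpenAI's actual code.

FULL_OUTPUT = "```python\ndef add(a, b):\n    return a + b\n```"

def generate(prefix: str, budget: int) -> str:
    """Stand-in model call: returns the next chunk of the reply,
    cut off after `budget` characters (a proxy for the token limit)."""
    return FULL_OUTPUT[len(prefix):len(prefix) + budget]

def chat_turn(budget: int = 20) -> list[str]:
    """Collect chat messages, auto-sending a "continue" while a
    code fence is still open in the accumulated reply."""
    messages, so_far = [], ""
    while True:
        chunk = generate(so_far, budget)
        messages.append(chunk)
        so_far += chunk
        # Stop when the model produced nothing more, or when every
        # opened code fence has been closed again.
        if not chunk or so_far.count("```") % 2 == 0:
            break
    return messages
```

With a budget of 20 characters, the loop produces three messages: the first one ends mid-code (an open fence), so two continuations are requested until the fence closes.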
As opposed to a typical completion, where only a single message is created.
So TL;DR: OpenAI’s servers are having hiccups. Gotta ride it out. Not your fault.