Is anyone else having problems with ChatGPT today? For me, it’s been super annoying. In roughly 9 out of 10 requests, it gets to the end of a long output, such as a web search or code-interpreter session, and then throws “Error in input stream”, forcing a complete regen.
I was planning to use it to help me build on the new code-interpreter API, but if the API is as flaky as this, maybe I’ll wait. I also wonder how tokens would be charged in the API for these failed requests: would the API stream the tokens and then error out near the end, or would the error happen server-side and bill me for the request anyway?
Has anyone played much with the new API?
BTW, I’m not complaining, I know it’s all very beta, I’m just champing at the bit to get my workflow plugged in with the new tools. And code-interpreter, when it works, works really well!
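If the API really does stream tokens and then die near the end, client code will probably need a retry wrapper either way. Here’s a minimal sketch of that idea in pure Python — note that `consume_stream_with_retry` and the stand-in `flaky_stream` generator are my own assumptions for illustration, not the actual SDK behavior or billing semantics:

```python
import time

def consume_stream_with_retry(make_stream, max_retries=3, backoff=1.0):
    """Collect tokens from a streaming generator, retrying from scratch
    if the stream errors out mid-way (like the "Error in input stream" case)."""
    for attempt in range(max_retries):
        tokens = []
        try:
            for token in make_stream():
                tokens.append(token)
            return "".join(tokens)
        except RuntimeError:
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            time.sleep(backoff * (2 ** attempt))  # exponential backoff

# Stand-in for a flaky API stream: dies near the end of the first attempt,
# then succeeds on the retry.
calls = {"n": 0}
def flaky_stream():
    calls["n"] += 1
    yield "Hello, "
    yield "world"
    if calls["n"] == 1:
        raise RuntimeError("Error in input stream")
    yield "!"

print(consume_stream_with_retry(flaky_stream, backoff=0))  # Hello, world!
```

Of course, if the server bills for the failed attempt too, a retry loop like this doubles the token cost of every flaky request, which is exactly why the billing question matters.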
I’m encountering the same issues, and it makes me wonder whether there’s been another reduction or “optimization” of the context window that’s hurting longer chats. If so, there’s a real business opportunity in selling extended-context solutions. As a devoted fan, I’m reluctant to look for coding alternatives elsewhere. The slowness is also a concern — scaling this up looks like a formidable challenge, especially with so many competing priorities in play.
I’m having the same issue when working with GPTs. Happens almost every answer.
I know, I really wasn’t exaggerating when I said it’s failing 9/10 times. I’m getting this error even without web search or code-interpreter.
Same issue here. Haven’t been able to get a single, complete answer over the last hour or so.
Also having this issue. Wish they would provide some update.
I’ve had the same issue since the last big update to the GPT-4 model. Almost every task that uses Bing ends with this error, “Error in input stream”. I also got an error in a brand-new conversation saying there was too much text and asking me to start a new conversation.