Yeah, same here.
Turns out we’ve worked around the issue. In our case, objects in the stream were being cut in the middle, and the remainder of the object would arrive in the next chunk (even though it’s the same object).
We created a buffer for the case where an object from a streamed chunk cannot be properly parsed: we wait for the next chunk and join the partial pieces before handling them.
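To illustrate the idea (this is a minimal sketch, not our actual code — the `ChunkBuffer` name and the plain-JSON framing are assumptions for the example): accumulate incoming text, try to parse it, and only emit once a complete object parses.

```python
import json


class ChunkBuffer:
    """Accumulates partial JSON fragments until a complete object parses."""

    def __init__(self):
        self._buf = ""

    def feed(self, chunk: str):
        """Append a chunk. Returns the parsed object once the accumulated
        buffer is valid JSON, otherwise None (partial object: keep waiting)."""
        self._buf += chunk
        try:
            obj = json.loads(self._buf)
        except json.JSONDecodeError:
            return None  # object was cut mid-stream; wait for the next chunk
        self._buf = ""  # complete object consumed; reset for the next one
        return obj


buf = ChunkBuffer()
buf.feed('{"id": 1, "text": "hel')   # returns None — partial object
buf.feed('lo"}')                     # returns {'id': 1, 'text': 'hello'}
```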
Up until yesterday we never had this issue; something changed on OpenAI’s side.
It’s a bit more complex, and we’d never experienced this before with the OpenAI API or any other provider, but it seems to be working.