API Error rate is gradually increasing

Today there have been a large number of errors in API requests, and the error rate continues to increase. I have checked the API key's availability and credit limit and found no problems. However, the following error still occurs intermittently:

cannot parse chat completion response{"id":"chatcmpl-xxx","object":"chat.completion.chunk","created":1730102505,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_f59a81427f","choices":[{"index":0,"delta":{"content":"xxx"},"logprobs":null,"finish_reason":null

It's obvious that an incomplete response was returned.


Ten minutes ago, things returned to normal and the error rate decreased significantly.

I will continue to observe, but I am not sure what the problem is.


Are you using a module from OpenAI, or your own code?

Can you log all chunks, all data, raw, as you receive them, direct from the API?

The forum post below demonstrates that a server-sent-event stream can be split across HTTP chunks (chunked transfer encoding, the framing applied on top of TCP), or arrive in even smaller increments. You'll need to reassemble the contents in a buffer, looking for the `\ndata: ` prefix that begins each transmitted OpenAI object, if you aren't using a higher-level HTTP library that does this for you.
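As a rough illustration of that buffering approach, here is a minimal sketch in Python (a hypothetical helper, not code from the OpenAI SDK): it accumulates raw chunks and only yields an event's `data:` payload once the blank-line event terminator has arrived, so a JSON object split mid-chunk is never parsed half-formed.

```python
def iter_sse_payloads(chunks):
    """Yield complete SSE `data:` payloads from an iterable of raw byte chunks.

    Chunks may split an event anywhere; we buffer until the blank-line
    separator (\\n\\n) that terminates each server-sent event.
    """
    buffer = b""
    for chunk in chunks:
        buffer += chunk
        # Each complete SSE event ends with a blank line.
        while b"\n\n" in buffer:
            event, buffer = buffer.split(b"\n\n", 1)
            for line in event.split(b"\n"):
                if line.startswith(b"data: "):
                    yield line[len(b"data: "):].decode("utf-8")

# A payload split mid-object across two chunks is still yielded whole:
chunks = [b'data: {"id": "chatcmpl-xxx", "choi', b'ces": []}\n\n']
print(list(iter_sse_payloads(chunks)))
```

Only after a payload comes out of this buffer would you hand it to your JSON parser; feeding partial chunks straight to the parser produces exactly the "cannot parse" error shown above.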

That is one possibility among many.

Thank you very much for your reply. I did not make any changes, and the error message no longer appears. It seems to have been a network problem.


Also, how can I close this thread, the way I would close a GitHub issue?