Bad performance with the same prompt after a weekend

I have been running into some strange problems with gpt-4o mini.
After the weekend, the results from the chat API are different from what they looked like last Friday. The model is outputting unexpected symbols such as `&nbsp;`, semicolons, greater-than signs, and so on, and I sometimes get empty strings in some fields of the response model. The quality of the generated results is not as good as it was last week, either. I have also hit some connection errors this time, which never happened before. I'm wondering whether something changed in the model? Does anyone else have the same issue?

Welcome to the forum!

This is a common type of question on this forum. Yes, the models are updated from time to time with no notice and no information about what was changed. This is not to be confused with the cases where updates to a model are publicly announced.
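One way to reduce surprise from silent updates to a floating alias is to pin a dated model snapshot in your requests. This is only a sketch, not official guidance: the snapshot name below is an example (verify it against the current models list), and `build_chat_request` is a hypothetical helper, not part of any SDK.

```python
# Sketch: pin a dated snapshot instead of the floating "gpt-4o-mini" alias,
# so silent updates to the alias don't change your results.
# The snapshot name is an assumption; check the models list for what exists.
PINNED_MODEL = "gpt-4o-mini-2024-07-18"

def build_chat_request(user_prompt: str) -> dict:
    """Build a chat-completions payload targeting a fixed snapshot."""
    return {
        "model": PINNED_MODEL,
        "messages": [{"role": "user", "content": user_prompt}],
        "temperature": 0,  # also reduces run-to-run sampling variance
    }

payload = build_chat_request("Summarize this ticket.")
```

You would pass a payload like this to the chat completions endpoint instead of hard-coding the alias at each call site, so switching snapshots later is a one-line change.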

At the bottom of each topic in this forum there is a list of related topics; reading through them may give you some ideas for how to mitigate this problem.
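For the connection errors specifically, retrying with exponential backoff usually helps. Here is a minimal self-contained sketch; `flaky_call` just simulates a transiently failing endpoint, and in practice you would wrap your actual API call (and catch your client library's own timeout/connection exceptions) instead.

```python
import time

def call_with_retries(fn, max_attempts=4, base_delay=0.01):
    """Retry fn() on ConnectionError, doubling the delay each attempt."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))

# Simulated flaky endpoint: fails twice, then succeeds.
attempts = {"n": 0}
def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient network error")
    return "ok"

result = call_with_retries(flaky_call)  # succeeds on the third attempt
```

Adding a little random jitter to the delay is also common, so many retrying clients don't hammer the server in lockstep.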
