Inconsistent Assistants API outputs for the last two days

I am using the Assistants API with GPT-4-turbo, which had been working fine for the last 3 months. Suddenly, over the last 2 days, output inaccuracy is very high, and nothing has changed on my side code-wise. Was there some internal update/upgrade that caused this? Last time this happened it was listed on the status page as a major/minor outage ("Elevated errors"), but this time nothing has been listed there so far.

As someone who's been doing this for over two years now, it's pretty clear they make internal changes to the models without announcing them or being transparent about it. So yeah, this kind of thing will keep happening until they commit to actually serving a stable model.

You can use models that are marked with a date (the dated snapshots); apparently they don't change those. But if you call a bare alias like `gpt-4o`, it can get repointed to a different version without notice.
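For anyone wiring this up: here's a minimal sketch of pinning a dated snapshot with the `openai` Python SDK. The snapshot ID below is just an example of the dated-vs-alias distinction; check the current model list for the exact IDs available to you.

```python
# Pin a dated model snapshot instead of a floating alias.
# Assumes the `openai` Python SDK; the snapshot name is an example ID.

PINNED_MODEL = "gpt-4-turbo-2024-04-09"  # dated snapshot (stable)
FLOATING_ALIAS = "gpt-4-turbo"           # alias (may be repointed silently)


def chat_request(prompt: str, model: str = PINNED_MODEL) -> dict:
    """Build kwargs for client.chat.completions.create(**kwargs)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,  # also cuts down run-to-run variance
    }


# Usage (needs OPENAI_API_KEY set):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(**chat_request("Hello"))
```

Pinning won't stop every regression (infrastructure changes can still affect behavior), but it removes the silent alias-repointing variable from the equation.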