I am using the Assistants API with GPT-4-Turbo, which had been working fine for the last 3 months. Suddenly, for the last 2 days, output inaccuracy has been very high, and nothing has changed on my side code-wise. Was there some internal update/upgrade that caused it? Last time this happened it was listed on the status.openai.com page as a Major/Minor outage ("Elevated errors"), but this time nothing has been listed there so far.
As someone who's been doing this for over two years now, it's very clear they make internal changes to the models without telling anyone or being transparent about it. So yeah, this stuff will keep happening until they decide to actually offer a stable model.
You can use the models that are marked with a date; apparently they don't change those snapshots. But if you call just a bare alias like gpt-4o, yeah, they'll probably swap something underneath and not tell you about it.
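For example, a minimal sketch with the Python SDK. The dated snapshot name here is just illustrative; check the models list in your account for whatever snapshots are currently available:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A pinned snapshot like "gpt-4-turbo-2024-04-09" keeps pointing at the same
# model version, while the bare alias "gpt-4-turbo" floats to whatever
# OpenAI rolls out next.
response = client.chat.completions.create(
    model="gpt-4-turbo-2024-04-09",  # example snapshot; list models to confirm
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```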
Late response, but the error went away on its own, so an internal fix must have been deployed. New releases (private/public) often used to break things; they're a bit better now and tend to break less.