Has anyone else noticed that "gpt-3.5-turbo-16k" has been behaving poorly over the past week?

It’s been behaving badly for some time now.

I've been told the problem is the prompt, not the model. But gpt-4 consistently answers every question gpt-3.5-turbo-16k can't, given the exact same context. So I'm not convinced.
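
In case anyone wants to reproduce this kind of side-by-side test, here's a minimal sketch using the openai Python SDK (v1.x). The system and user messages are placeholders, not my actual prompt, and `temperature=0` just reduces sampling noise so differences are more likely to reflect the model rather than randomness:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MODELS = ["gpt-3.5-turbo-16k", "gpt-4"]

def compare(messages):
    """Send the identical conversation to both models and print the replies side by side."""
    for model in MODELS:
        resp = client.chat.completions.create(
            model=model,
            messages=messages,
            temperature=0,  # deterministic-ish output makes the comparison fairer
        )
        print(f"--- {model} ---")
        print(resp.choices[0].message.content)

# Placeholder conversation: substitute the context and question
# that gpt-3.5-turbo-16k keeps getting wrong.
compare([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "..."},
])
```

If gpt-4 reliably gets it right and gpt-3.5-turbo-16k reliably doesn't under these conditions, that points at the model's capability rather than the prompt.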