Changes in legacy GPT-4 quality in the last week

Has anyone noticed a change in (legacy) GPT-4 chat completions quality since the release of GPT-4 Turbo?

Up until last week, we had ChatGPT returning a topic/skill marker at the start of each response, so we could collect stats and apply some formatting niceties.

We’re not using function calling yet, for reasons too dull to go into.

Prior (correct) response:

```
##GeneralKnowledge

You were looking for blah blah
```

New (incorrect) response:

```
GeneralKnowledge

You were looking for blah blah
```
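
For context, here is roughly how we drive and parse that marker, a minimal sketch only (the prompt wording, the `ask` helper, and the regex are illustrative, not our production code):

```python
import re

from openai import OpenAI  # assumes the openai>=1.0 Python SDK

client = OpenAI()

# Illustrative wording -- our real prompt differs, but the idea is the same.
SYSTEM_PROMPT = (
    "Begin every reply with the relevant topic/skill on its own line, "
    "prefixed with '##' (e.g. '##GeneralKnowledge'), then answer normally."
)

TOPIC_RE = re.compile(r"##(\w+)\s*\n?")


def ask(question: str) -> tuple[str | None, str]:
    """Return (topic, body); topic is None when the '##' marker is missing."""
    resp = client.chat.completions.create(
        model="gpt-4",  # the legacy model in question
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    text = resp.choices[0].message.content or ""
    match = TOPIC_RE.match(text)
    if match:
        return match.group(1), text[match.end():].lstrip()
    # This is the failure mode we're now seeing: the topic comes back
    # without the '##' prefix, so parsing falls through to here.
    return None, text
```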