Yes, I’ve read about the “winter break”/“lazy December” issue, if that’s what you’re referring to. I think it’s something that can only be confirmed if it keeps happening in the future; for now it seems pretty speculative, though important to keep in mind.
Certainly garbage data in produces garbage data out, but that’s not what I experienced with GPT-4: it was surprisingly consistent. You can check out this thread/post if you’re interested in seeing in more detail what I’m talking about.
So, are you saying that the new (turbo) iteration with updated training datasets could explain the rampant API hallucinations, the systematic placeholder comments instead of implementations, forgetting context after 3-5 prompts, ignoring “under the hood” custom instructions, and the generic summaries given in response to specific questions?
From what I think I understand, I don’t see how this could be fundamentally related (or at least not to the point of the “noise generator” it has become), but I’m obviously no expert or insider.
I would rather suspect a financial “optimization” ($$$) around token consumption.