GPT-3.5-turbo-1106 is worse than the 0613 version

What that doesn’t show is the trend of evaluations over time.

gpt-3.5-turbo-0613 started off good.

It then got progressively worse through a series of overnight, undocumented changes foisted on the model.

And guess what I just experienced in ChatGPT with GPT-4: the same symptoms seen in those later alterations to gpt-3.5, beyond the inability to follow system instructions. The model got hung up on prior inputs and was unable to complete a new task. Instead, it parroted back a previous code job, tried to answer my new question within the frame of the existing context, and then spat out the same code again, despite clear instructions that the old code was no longer in question. The session had to be abandoned.