GPT-3.5 / 4 model documentation (Nov 7, 2023) still has inaccuracies about "snapshots"

Things that used to work no longer do.

Ugh, if this forum was hard to search, my propensity for screenshots makes it worse…

First: what model writes the titles for ChatGPT? The same one that has to follow developers' system instructions?

That weekend was the first big hit, where I discovered I could no longer write prompts the way that everything I knew about the model's behavior said had worked. And here, others report the same (although these anecdotes are certainly not a "daily benchmark thread and perplexity dump").


This is the kind of playground preset I could have dropped onto the function-calling 3.5 model at its introduction: written with absurd metacode that the AI loved, written for the forum. Now I find I have to fall back to 0301 for success.

After OpenAI snafu'd 0301 a month ago by filling inputs with extra tokens, it never recovered and cannot give those same answers.

https://platform.openai.com/playground/p/ChjHltqLWSywsL1MY2GgC0hC?model=gpt-3.5-turbo-0301

The answers in the playground preset are not multi-shot examples I wrote, but the actual AI outputs. I'll let you progressively erase some of the answers and try it on our handful of models today.

A lot of my chagrin is also at ChatGPT, so it's hard to separate some of the frustration. The same 3.5 zero-shot prompt that took my two functions and gave me back two drop-in altered functions now wraps them in import statements, a class with an init, made-up variables, and a main, just because. (Yes, I ran that against an old chat just a few days ago.)

This forum two weeks ago was again full of "what did they do??"

Thanks for at least hearing me out.


Correction: -0301 wasn't directly and regularly updated. That one was a snapshot; from that date, the gpt-3.5-turbo model diverged from it, receiving the updates of being the live model. That live gpt-3.5-turbo was then abandoned with the switchover at the end of June to the "pointer method," with the alias pointed at -0613.
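To make the snapshot-vs-alias distinction concrete, here is a minimal sketch of the difference it makes in an API request. The model names are the real OpenAI identifiers discussed above; the `build_request` helper is a hypothetical illustration that only assembles a chat-completions payload and never sends it.

```python
# Hypothetical sketch: pinning a dated snapshot vs. using the floating alias.
# "gpt-3.5-turbo-0301" was a frozen snapshot (now deprecated); the bare
# "gpt-3.5-turbo" alias was re-pointed at -0613 at the end of June 2023,
# so the same code can silently start hitting a different model.

PINNED = "gpt-3.5-turbo-0301"  # dated snapshot: behavior fixed until removal
ALIAS = "gpt-3.5-turbo"        # pointer: re-aimed by OpenAI over time

def build_request(model: str, prompt: str) -> dict:
    """Assemble a chat-completions payload (illustration only, not sent)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# With the alias, this identical request can produce different behavior
# after a pointer switchover; with the pinned snapshot, it cannot.
req = build_request(PINNED, "Refactor these two functions.")
```

The practical upshot for the complaints above: code that specified the bare alias was migrated to -0613 behavior automatically, while code pinned to -0301 kept the old behavior only until that snapshot itself was altered and deprecated.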