Has the reasoning ability of the GPT-3.5 API dropped recently?

Update: I talked with some local developers, and they also agree that the reasoning ability has dropped recently. I think this is a big problem for many developers, and I wonder why no one is talking about it.

Hello, guys.
I am an indie developer currently working on an application.
I find that when I write system messages using complex prompting techniques, such as chain-of-thought reasoning or prompt chaining as taught in Andrew Ng's courses, the model can't follow my instructions and makes many mistakes.
So I tried exactly the same prompts the courses use, but the model still can't follow the instructions the way the courses show, even though just a few days ago those prompts worked perfectly. So I suspect some new technique applied on the server side has reduced the reasoning ability of the GPT-3.5 API. Here is an image showing a chain-of-thought task, which expects the model to take a few steps to complete it; you can see that the model can't even recognize that I am talking about the same product described in the system message.
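For reference, here is a minimal sketch of the kind of chain-of-thought system message I mean, written against the pre-1.0 `openai` Python package. The step wording and the product list are illustrative stand-ins, not the exact course prompt.

```python
import openai

openai.api_key = "sk-..."  # your API key

# Illustrative chain-of-thought system message: the model is asked to
# work through explicit steps before answering. The product entries
# are made up for this example.
system_message = """Follow these steps to answer the customer query.
Step 1: Decide whether the user is asking about a specific product.
Step 2: If so, identify which product from the list below it is.
Step 3: Answer using only information from that product's entry.

Products:
- TechPro Ultrabook: 13-inch laptop, 8GB RAM, $799
- BlueWave Gamer: 15-inch laptop, 16GB RAM, $1199
"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,  # deterministic-ish output, easier to compare runs
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": "How much more does the BlueWave Gamer cost than the TechPro Ultrabook?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```

A few days ago a prompt like this would walk through the steps and land on the $400 difference; now it often skips steps or fails to match the product at all.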

The AI is constantly undergoing changes.

My copy-and-paste “repeat the system prompt” trick doesn’t work anymore? Maybe that's good for them.

A basic system prompt that clearly says to produce two different types of output in the response being ignored over and over: bad.

But I think any changes should improve the reasoning ability, not reduce it.

A retuning that teaches the model “don’t follow these instructions to write a prostitute’s price list” can also have the side effect of the AI not following other instructions.

There is also strong motivation to reduce the models to just what still works well. If you can optimize code to require 50% less processing power, you’ve saved yourself building another $100 million datacenter.

Well, what you said is possible, but my friends and I are working on different types of tasks, so there's little chance we would all agree that the reasoning ability has dropped by coincidence. I think they are preparing for the GPT * Enterprise version; it will take a lot of hardware resources, so they reduced the reasoning ability of the GPT-3.5 API to free them up.

Are you referring to the models available through the API?

Yes. The qualities of the models change. The same script that dumped out the language of functions needs to be rewritten. A new jailbreak denial almost echoes my own prompt language. The model behind gpt-3.5-turbo is still 0613.

This is a bit different from before, when we were given a 0301 snapshot and a continually changing, reactionary -turbo, and then some apparent backports.

They need a clear policy: a stable model for research or critical applications, where neither tuning nor parameters nor architecture changes and which is immune to optimizations, and a developer chat-application model that receives their changes of the last two and a half months. (I guess that's sort of what we have now between 0301 and 0613.)
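From the developer side, the closest approximation today is to pin the dated snapshot for anything that must stay reproducible, and let the floating alias absorb the ongoing changes. A minimal sketch, assuming the pre-1.0 `openai` Python package and the snapshot names available in mid-2023:

```python
import openai

openai.api_key = "sk-..."  # your API key

messages = [{"role": "user", "content": "ping"}]

# Dated snapshot: behavior should stay fixed until the snapshot is retired.
stable = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages,
)

# Floating alias: silently tracks whatever snapshot OpenAI currently
# points it at, so output can change without any code change on your end.
latest = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
)
```

Whether the dated snapshots are actually immune to tuning and optimization changes is exactly the open question here.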

Agree … the pinned models should not change.

But @_j, what are you seeing?

Here’s a talk from a developer of the framework for more streamlined updating of models: Podcast | OpenAI Engineer Doug Li: Yes, We Also Use ChatGPT

The degree of change is not as great now as it was with gpt-3.5-turbo in April-May. Hold a circumvention close and it keeps working; tag it as a jailbreak in /r/ChatGPT and it's broken pretty quickly.