Less Instruction Following in Recent Update?

Hello,

After a recent update, my GPT no longer consistently follows my instructions. I ask it to give three key points, but it provides one or two key points, or nothing at all, most of the time.

In the previous version, it almost always followed my instructions, with only occasional lapses. I already use “must” in the prompt.

Do any of you have the same issue or have any suggestions?

I can confirm that my GPT has the same problem, although for me it’s quite a bit more severe: it ignores the instructions 95% of the time. In my case I think it’s due to the long knowledge files and long custom instructions. The very limited context window completely defeats the purpose of GPTs. (Note: I say 95% because I actually measured it across hundreds of tests.)

As another example of how unusable GPTs have become: today I asked the Wolfram GPT (arguably the most sophisticated GPT out there) to use Mathematica to calculate the determinant of a matrix. This is something that could be done easily a few months back, but this time the GPT sent the command to the Wolfram API with a different matrix from the one I had input. It completely modified and made up my request.
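One cheap way to catch this kind of silent substitution is to compute the determinant locally and compare it with what the GPT reports back. A minimal sketch in pure Python (the 2×2 matrix here is a hypothetical placeholder, not the one from the original session):

```python
# Local sanity check for a GPT-computed determinant.
# Recursive cofactor (Laplace) expansion -- fine for small matrices.
def det(m):
    """Determinant of a square matrix given as a list of row lists."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for col in range(n):
        # Minor: drop row 0 and the current column.
        minor = [row[:col] + row[col + 1:] for row in m[1:]]
        total += (-1) ** col * m[0][col] * det(minor)
    return total

matrix = [[1, 2], [3, 4]]  # hypothetical input matrix
print(det(matrix))  # 1*4 - 2*3 = -2
```

If the value the GPT returns doesn’t match the local result, you know it mangled the input before calling the API.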

Unfortunately you are not alone. The quality of GPT responses has become very unreliable and unpredictable over the last 2-3 months. We’ve wasted a lot of time trying to improve our instructions and data, only to find that it works great for a few hours, then becomes useless for a couple of hours, then works great again, and then back to useless.
I suspect this is due to a lack of compute capacity on the back end, but OpenAI does not publish its utilization percentage or other metrics that might help us understand what’s going on.

Frankly, I’m quite disappointed, and I wonder how many more months (years?) it will take for this platform to become reliable enough to be usable in the real world, because at this time it is not.

I’m finding this even more so since the update to GPT-4o. Custom GPTs are not following clear step-by-step instructions that worked with GPT-4.

It’s a shame custom GPTs can’t specify which model to use.

Same here! I wonder if OpenAI will ever make ChatGPT better, or if it’ll remain useless going forward…