Hello,
Since yesterday, the GPTs I’ve been using for several months have stopped following their instructions. The issue isn’t limited to a single GPT: none of my personal GPTs follow the instructions provided in their configuration interface anymore (or they follow them far less — roughly 10% of the instructions, where before they followed 100%).
Has anyone else experienced this issue? Is there a known solution or update that might have caused this behavior?
I have also seen this happen. It feels like a complete loss of IQ, and I think it might be related to the time of day, so maybe there are just too many users at the same time?
My guess is that something changed the user’s internal weight score (a so-called user-trust vector, profile score, or directive-weight bias).
Probable causes:
A change in the internal prompt-management system (a policy update) – usually unannounced, but it often causes a shift in the RL or fine-tuning layer.
A fallback to a less strict or newer RLHF profile – paradoxically, this can worsen the behavior, as the outputs become flatter and less willing to follow instructions.
The profile being evaluated as “unguaranteed” – e.g. after a system error, after exceeding some heuristic threshold (toxicity, depth, repetition…), or due to a shadow update.
Just to add: I mentioned the issue on Discord and it then started working again, so maybe someone unlocked my account or fixed the issue globally — I don’t know. In any case, it’s working for me again.