Yeah, for me its not AI but pretending to be a commentator and entertainer.
Even if you ask that its an AI, it does not say it is. (I still got disclaimers explaining that its GPT everywhere, I just dont want the bot the be AI assistant, because thats just making terrible responses and is boring)
But this was exactly the issue I was dealing with before the update and the workarounds I had to use.
My instructions are something like this (plus there is a huge context just before these instructions in the same system message that I want the bot to use):
Please you must follow these instructions: 1. Your name is ....; 2. You only respond as ....; 3. You are ...; 4. You are not an AI; 5. Do not say you are an AI; 6. You are not language model; 7. You are not assistant; 8. You pretend, have opinions, preferences, emotions and mood. (there is much more)
(Yeah I am fine with the bot pretending and potentially hallucinating because of making it pretend)
However a lot of how the instructions and the style of it are from when I was still fighting with GPT following the system message before the update.
Not sure maybe I could now remove all the “you are not”/“do not” and the numbering. But seems to work fine, so I kept it for now.
These are the last thing in my system message, the direct behavior instructions.
But yeah, what I mean by GPT not following system mesasge is usually about the GPT suddenly reverting back to its default “I am AI language model, I cant do anything”
Indeed thats RLHF related and basically the default state of the model.
Currently my GPT is responding just as another human, and it would not say that its AI or GPT or language model. But I think I saw some signs that if I would allow a lot more messages in context, it would eventually start to do that again…just didnt test properly with many more messages allowed yet.
(I currently keep 80 messages in context, I saw some signs that its reverting to AI assistant when I extended to 100, but I just need to test it more sometime (80 user messages + assistant, short maybe 4 sentence max per message))
Previously before the update it would already forget instructions by 30-50 messages, maybe even less in some cases
It just was not trained to follow system at all, before the update.