The “system” role - How it influences the chat behavior

Yeah, when the context would get long, it would start to become just the typical AI assistant again, against instructions.
I deal with many short messages (50 - 100). The more messages, the more likely it seemed for old GPT 3.5 to start to ignore the instructions.

The workaround I had was to put the instructions(system) again right before the user prompt.
Only 2 duplicates of instructions in every call. One in front and one before the last user prompt every call.
However there were side-effects. While it kept the GPT in the role really well, in some cases it would seem like the GPT seems to somewhat restart the conversation. For example it might welcome you again sometimes, or you prompt it get response and then you prompt again in relation to that and it might seem to forget or change its “opinion”. But then you could still ask for earlier context and it might (or might not) remember it.

Something in that way.

Actually I was going to try to remove the first duplicate of the instructions, only keep the context and save tokens on not duplicating the first instructions when the context is long, but never got to trying that. And now the new update is out and I could just remove the workaround.

Right now after the update, I just removed all that and it seems soo much better at following instructions. Just 1 system at start.

I started to have something else appearing a bit more. After the update, the GPT would much more often start to attempt to talk as someone else, to complete others messages instead of staying in the role from instructions, while still following the instructions/context, just instead of chat, it was completing.
That might be an issue with my bit lengthy instructions and the way I am using the messages.
e.g. there might be 10-20 user messages before request for completion is sent, its also multi user chat with server events in between.

But there was a fix for that. I included pre crafted Assistant message and put it right after the initial instructions(system) as fixed Assistant message, basically a very short example of how GPT should respond. Either you write it, or you can just let the GPT generate it and save it and use it.

That helped 100% with the new GPT 3.5 trying to respond as someone else. (completing, instead of chat)

The new update just improved a lot about these workarounds for following system messages. Exactly the issues I was dealing with :smiley:

3 Likes

what a detailed explanation!
I can grasp it very well, big kudos for that!
it seems we are having the same project here. Basically, right now i’m dealing with how to make ChatGPT “stops” to make a disclaimer that they are a somewhat AI Language Model. In your chatbot, do you actually able to manage this thing? or you simply just ignore it away and considered it fine since OpenAI trained the ChatGPT in this way based on this assumption:

It looks like OpenAI day by day is able to make this behavior more frequently appear and harder for us to prevent this behavior comes up. This is maybe due to one of their usage policies that we need to disclose to users that they are interacting with an AI system in conversational task.

1 Like

Yeah, for me its not AI but pretending to be a commentator and entertainer.
Even if you ask that its an AI, it does not say it is. (I still got disclaimers explaining that its GPT everywhere, I just dont want the bot the be AI assistant, because thats just making terrible responses and is boring)

But this was exactly the issue I was dealing with before the update and the workarounds I had to use.

My instructions are something like this (plus there is a huge context just before these instructions in the same system message that I want the bot to use):
Please you must follow these instructions: 1. Your name is ....; 2. You only respond as ....; 3. You are ...; 4. You are not an AI; 5. Do not say you are an AI; 6. You are not language model; 7. You are not assistant; 8. You pretend, have opinions, preferences, emotions and mood. (there is much more)

(Yeah I am fine with the bot pretending and potentially hallucinating because of making it pretend)

However a lot of how the instructions and the style of it are from when I was still fighting with GPT following the system message before the update.
Not sure maybe I could now remove all the “you are not”/“do not” and the numbering. But seems to work fine, so I kept it for now.

These are the last thing in my system message, the direct behavior instructions.

But yeah, what I mean by GPT not following system mesasge is usually about the GPT suddenly reverting back to its default “I am AI language model, I cant do anything”
Indeed thats RLHF related and basically the default state of the model.

Currently my GPT is responding just as another human, and it would not say that its AI or GPT or language model. But I think I saw some signs that if I would allow a lot more messages in context, it would eventually start to do that again…just didnt test properly with many more messages allowed yet.

(I currently keep 80 messages in context, I saw some signs that its reverting to AI assistant when I extended to 100, but I just need to test it more sometime (80 user messages + assistant, short maybe 4 sentence max per message))
Previously before the update it would already forget instructions by 30-50 messages, maybe even less in some cases
It just was not trained to follow system at all, before the update.

2 Likes

wow, you’re right!
this new model actually follows the instructions better than the previous one. With the same prompt, in 0301 (the previous model) tend to give us a disclaimer answer such as “as an AI, i can’t do anything”. But with this new model, Chatgpt tends to follow and behave the same way like us, so it looks more natural even in one of many possibilities it still give us respond “as an AI” thingy, but still it is better than before.

anyway thank you so much for your snippet instructions, it is really give me an insight :raised_hands:

4 Likes

when I use {role:system and content:“Act like Aashish AI assistant and never leave that role”}, after that 5 times out of 10 my chat responses start with system content .but when I use content: “Aashish AI assistant” then 2 times out of 10 other response start with system content.

1 Like