Yeah, when the context got long, it would revert to the typical AI assistant persona again, against the instructions.
I deal with many short messages (50-100 per conversation). The more messages there were, the more likely the old GPT-3.5 seemed to start ignoring the instructions.
The workaround I had was to insert the instructions (system message) again right before the last user prompt. So there were only two copies of the instructions in every call: one at the front and one right before the last user prompt.
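The duplication workaround can be sketched roughly like this, assuming the standard Chat Completions message format (a list of role/content dicts). The function and variable names here are mine, just for illustration:

```python
# Hypothetical sketch of the "duplicate system message" workaround.
# The persona text and helper name are made up for this example.

SYSTEM_INSTRUCTIONS = "You are Bob, a gruff tavern keeper. Stay in character."

def build_payload(history, last_user_prompt):
    """Build the messages list with the instructions duplicated:
    once at the front, once right before the newest user prompt."""
    messages = [{"role": "system", "content": SYSTEM_INSTRUCTIONS}]
    messages.extend(history)  # prior user/assistant turns
    # Second copy of the instructions, just before the new prompt
    messages.append({"role": "system", "content": SYSTEM_INSTRUCTIONS})
    messages.append({"role": "user", "content": last_user_prompt})
    return messages

history = [
    {"role": "user", "content": "Evening, Bob."},
    {"role": "assistant", "content": "What'll it be?"},
]
payload = build_payload(history, "Got any stew left?")
# payload contains exactly two system messages:
# one at index 0 and one second-to-last, before the final user prompt
```

You would then pass `payload` as the `messages` argument of the chat completion call.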
However, there were side effects. While it kept the GPT in the role really well, in some cases the GPT would seem to somewhat restart the conversation. For example, it might welcome you again, or you'd prompt it, get a response, prompt again in relation to that, and it might seem to forget or change its "opinion". But you could still ask about earlier context and it might (or might not) remember it. Something along those lines.
Actually, I was going to try removing the first copy of the instructions and keeping only the one near the end, to save tokens when the context is long, but I never got around to trying that. And now the new update is out, so I could just remove the workaround entirely.
Right now, after the update, I've removed all of that and it seems so much better at following instructions. Just one system message at the start.
Something else started appearing a bit more, though. After the update, the GPT would much more often attempt to talk as someone else, completing other people's messages instead of staying in the role from the instructions. It still followed the instructions/context, but instead of chatting, it was completing.
That might be an issue with my somewhat lengthy instructions and the way I'm using the messages.
E.g. there might be 10-20 user messages before a request for completion is sent; it's also a multi-user chat with server events in between.
But there was a fix for that. I included a pre-crafted assistant message and put it right after the initial instructions (system message) as a fixed assistant message: basically a very short example of how the GPT should respond. Either you write it yourself, or you just let the GPT generate it once, save it, and reuse it.
That helped 100% with the new GPT-3.5 trying to respond as someone else (completing instead of chatting).
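The priming fix looks roughly like this, again assuming the standard Chat Completions message format. The example reply text and names are made up; the point is only the position of the fixed assistant message, directly after the system message:

```python
# Hypothetical sketch of the "fixed example assistant message" fix.

SYSTEM_INSTRUCTIONS = "You are Bob, a gruff tavern keeper. Reply only as Bob."

# A short, fixed example of the desired response style. You can write
# it by hand or generate it once with the model, save it, and reuse it.
PRIMING_REPLY = {
    "role": "assistant",
    "content": 'Bob wipes down the counter. "Aye, what do you need?"',
}

def build_messages(chat_log):
    """System message first, then the fixed example assistant message,
    then the actual multi-user chat log (including server events)."""
    return [
        {"role": "system", "content": SYSTEM_INSTRUCTIONS},
        PRIMING_REPLY,
        *chat_log,
    ]

messages = build_messages([
    {"role": "user", "content": "Alice: anyone seen the blacksmith?"},
    {"role": "user", "content": "[server] Carol joined the tavern."},
    {"role": "user", "content": "Carol: hey all!"},
])
```

Because the model sees an in-character assistant turn before any of the multi-user chatter, it has a concrete example of responding in role rather than continuing someone else's message.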
The new update just made a lot of these workarounds for following system messages unnecessary. Exactly the issues I was dealing with.