ChatGPT is ignoring system messages

I’m providing a list of “Potentially useful notes” in a system message. ChatGPT is not referencing these in its responses, even when the question, with the exact same text, is explicitly stated and a clear answer provided in the notes. If I add “Refer to your system message” to the user queries, it seems to work fine, but that’s an awful hack I shouldn’t have to add. Any suggestions?


My operating understanding of the attention mechanism is that it’s basically a very fancy relevance-weighting step inside an autoregressive model.

It looks for signals in the token sequence that are conceptually similar to what’s currently at hand (i.e. at the end of the text), and pools that information to distill the next most likely token.

So the best way to get the model to pay attention to the right thing is to ensure there is a high signal to noise ratio in your context.

Here are a couple of options:

  1. reducing clutter (reducing noise)
  2. using bribing, all caps, delimiters, or similar methods to maintain attention (increasing source amplitude)
  3. forcing a chain-of-thought process that lets the model surface the correct instruction (improving local signal) before it answers
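As a rough sketch, the three options above can be combined in a single prompt template. The delimiter style, wording, and helper name here are my own assumptions, not anything ChatGPT requires:

```python
def build_prompt(notes: str, question: str) -> str:
    """Hypothetical high signal-to-noise prompt: only the trimmed
    notes (less clutter), loud delimiters around them (amplitude),
    and a forced quote-then-answer step (local signal)."""
    return (
        "### NOTES (AUTHORITATIVE - CHECK THESE FIRST) ###\n"
        f"{notes.strip()}\n"
        "### END NOTES ###\n\n"
        "Before answering, first quote the single note most relevant "
        "to the question, then answer using only that note.\n\n"
        f"Question: {question.strip()}"
    )

print(build_prompt("The rate limit is 90k tokens/min.",
                   "What is the rate limit?"))
```

The point is not this exact wording but the shape: the relevant material sits close to the question, fenced off from everything else, with an explicit instruction to look at it first.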

I am assuming you are referring to ChatGPT, and that by “system message” you mean the custom instructions.

That’s how I’ve been doing it for some time now. My impression is that the custom instructions are passed in only once, at the start of the conversation, rather than being repeated at each conversational turn, but I could be wrong.

During longer conversations with ChatGPT it helps to pass in the custom instructions repeatedly via user message and/or add a reminder to actually use the guidance provided.
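If you’re using the API rather than the ChatGPT UI, one way to sketch that idea is to restate the instructions as a short reminder prefixed to each new user turn, so they always sit near the end of the context. The helper and message shapes below are an assumption about how you manage history, not an official pattern:

```python
def with_reminder(messages: list[dict], instructions: str,
                  user_text: str) -> list[dict]:
    """Return a new message list with the next user turn,
    prefixed by a restatement of the custom instructions."""
    reminder = f"(Reminder - follow these instructions: {instructions})"
    return messages + [
        {"role": "user", "content": f"{reminder}\n\n{user_text}"}
    ]

history = [{"role": "system", "content": "Answer using the provided notes."}]
history = with_reminder(history, "Answer using the provided notes.",
                        "What is the rate limit?")
```

In the ChatGPT UI you’d do the equivalent by hand: paste the reminder line into the chat every few turns.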

There is something else you can try: adapting the user message to match the custom instructions in some way. If you can get the model to recognize a pattern shared by both, you can influence its behavior according to your needs.