What exactly does a System msg do?

I’m trying to understand what the user, assistant, and system messages are actually doing. I think I understand what the user and assistant messages are doing, but I’m not sure what the system message is doing.

USER - A user msg is sent to the LLM. The LLM retrieves the embedding vector for that msg and finds the best matches in the foundation model.
ASSISTANT - This is addition text that is combined with the USER message. The combined message is sent to the LLM to retrieve the embedding vector for the combined message to find best matches.
SYSTEM - what does this really do? Are these just additional ASSISTANT messages?

Any insight is greatly appreciated.

2 Likes

in theory:

the user messages are messages that the user wrote
the assistant messages those the bot wrote
the system message is a message that the developer wrote to tell the bot how to interpret the conversation. They’re supposed to give instructions that can override the rest of the convo, but they’re not always super reliable depending on what model you’re using.

here’s an example

the system message, the user messages and the assistant messages are joined in an array. that array, the whole thing, is the context that gets sent to model to create an inference: the model response.

it’s pretty similar to the text completion endpoints. the only difference is that instead of big strings, you’re sending and receiving JSON arrays.

hth

2 Likes

Appreciate the response.

Does that mean I would never send an Assistant msg to the server?

I’m using RAG, so would I put the similar context (coming from a vector db similarity results) into a System message (3.5 turbo)? Or would I just put the similarity results into a User message and then append the user’s string?

1 Like

If the Assistant string is combined with the System/User strings to provide context for the model, what happens when the Assistant sends back a wrong answer? Doesn’t that skew the results going forward?

The “assistant” role message is primarily to show the AI what it produced before as a record of the chat, or what you want it to believe that it produced in response to user input.

The AI that has been trained on chat would logically be given the last response it generated, so that the user could say “that’s not what I meant, I meant for you to duplicate the entire schema” and then the AI can not just understand the topic, but also can see its prior output and understand a different result than the one provided is desired.

1 Like

Thank you. Makes sense. The OpenAI docs have a confusing explanation. It just says “assistant” messages are from the model, which is misleading. The obvious question would be “why would I want to send an ‘assistant’ message if that’s from the model?” They should mention it is useful to send back “assistant” messages (and “user” messages) as further context to more accurate completions.

Thanks, but your explanation of System messages are at a very high level (what are instructions when it comes to embeddings and neural nets?). I’m looking for a more technical explanation on how a System message influences the internal neural network so the user (+context window) prompt returns a different completion.

1 Like

Its where you define the role, giving it the instructions on what you want it to do. I like to say roles, and goals… So for example…

'Act as a python developer." or ‘You are an AI assistant and you will answer user provided questions, and output in markdown.’

I guess I’m not making myself very clear. Sorry. I’ve read the OpenAI docs where it says that. I am interested in understanding the impact of the System messages on the underlying neural network (or not). “Instructions” are just an abstraction for us humans. The LLM doesn’t really know about “Instructions”. The System prompt and the prompt engineering (preamble/context) are influencing the underlying foundation model to complete a prompt. My question is “How?”. Is it changing the starting point weights of the NN or something else?

1 Like

The AI has been fine-tuned on the ChatML containers, which enclose the specific role names and content of messages in special tokens.

Then by methods similar to fine-tuning, the AI is trained on much more examples of it responding and modifying its behavior based on “you are a princess AI” type system identities and system instructions that can go in that role for shaping behavior. The instructions there are more likely to be believed as authorized, while the user role is trained on data inputs or instructions suitable to come from a user, while being denied things like making permanent changes to what was listed in a system role.

Another example would be functions - also placed in the system role and the AI trained on a diverse variety to infer what to output when receiving new specifications.

Machine learning reward models on producing the token sequences of training data are used to encourage them. You can dive right into the PPO paper of OpenAI, or just look at how to retrain gpt-2.

In theory. In the past.

gpt-3.5-turbo system messages are now more like the symptoms of "we took out all the training on system messages and the only thing trained on is “you are ChatGPT”, and then since people were putting jailbreaks into custom instructions (also placed in system messages), we put a lot more effort into ignoring anything an API developer might actually want done in system messages. sorry.

3 Likes

Thank you for the detailed answer! Much appreciated.

That seems to imply that for gpt-3.5-turbo, system-style messages are effectively treated exactly the same as user messages.
And for other models, system messages are still treated similar to fine-tuning.

Is this correct?

As an educator, I want to get this info correct. Greatly appreciated the help.

1 Like

At least as recently as the 0613 models, I found gpt-3.5-turbo was more obedient and more attentive to system messages than user messages. In particular, I had better luck getting the model to obey multi-part instructions when it was in system message. Has this changed with 1106?

As a bit of context, with text completion model prompting, it was more common to send portions of a pre-defined conversation to an LLM as part of your prompt because the model behavior (pre “Chat” models) was focused on “completing” the pattern. You could get more consistent results by providing the first 2-3 parts of the exchange, then asking for the model to produce the last part.

You can still prompt a Chat model this way with fabricated conversation, although they are better at understanding intent, and generally don’t need it as much. But assistant messages are still relevant because they inform the model as to what has been previously stated by each speaker.

My highly non-technical mental model for this is:

  • Assistant Message: Responses from the model. Informs future responses, but is least authoritative.
  • User Message: Input from the User - medium authoritative, can steer the model contrary to Assistant Messages, unless asking for some answer prohibited by model guardrails, or contrary to System Message.
  • System Message - Most authoritative. Model is most attentive to, and tries hardest to obey.

of course, there are many ways to “jailbreak” a model into acting in a manner contrary to System Message, but it will try at least initially.

1 Like