Can I pass chat context (summary) via the system message?

Rather than use multiple user/assistant message pairs, I am attempting to create a “context summary” in my script, especially removing what I consider to be fluff words and needless explanation from the response. Could I then add that context summary to the next request as part of the system message? Or is the system message really just for generic things like “You are a helpful assistant” as mentioned in the docs?

If the answer is no, is there any real point to the system part of the message?

Additionally, will the token savings I get from creating a context summary rather than using the full user/assistant messages actually hinder the LLM’s ability to give a best response?


Yeah, it's completely doable. The system message is not a one-shot, set-and-forget thing. You can change it at will, and technically it is submitted every time as part of your prompt.

import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# The system message is sent alongside the user turns on every request,
# so you're free to rewrite it between calls.
completion = openai.ChatCompletion.create(
  model="gpt-3.5-turbo",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ]
)

print(completion.choices[0].message)

So, at certain intervals, you can summarize the conversation and append it to the system content. Maybe make a system template with a section for the summary.

You are a helpful assistant.

[History Summary]
The user asked questions about dynamic system messages and history summarization.
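To make that concrete, here's a minimal sketch of folding a running summary into the system message. The template layout and helper name (`SYSTEM_TEMPLATE`, `build_messages`) are made up for illustration, not part of the API:

```python
# Template with a dedicated slot for the running conversation summary.
SYSTEM_TEMPLATE = (
    "You are a helpful assistant.\n"
    "\n"
    "[History Summary]\n"
    "{summary}"
)

def build_messages(summary, user_input):
    """Fold the running summary into the system message instead of
    resending the full user/assistant history each request."""
    return [
        {"role": "system", "content": SYSTEM_TEMPLATE.format(summary=summary)},
        {"role": "user", "content": user_input},
    ]

summary = ("The user asked questions about dynamic system messages "
           "and history summarization.")
messages = build_messages(summary, "Can I also swap the summary out mid-conversation?")
```

You'd then pass `messages` to `openai.ChatCompletion.create(...)` exactly as in the example above, regenerating the summary (and rebuilding the system message) whenever the conversation grows past your token budget.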

Thanks! I do send the system message each time, but it's very generic at the moment.

So the system message has just as much impact on the LLM's response as the user/assistant pairs do?


Yeah, in the beginning it used to be pretty bad. I don't know if that was because I was bad at writing them or because they've changed it quite a bit. Either way, I've noticed that a properly worded system message will really drive the conversation in the direction you want.

I made a semi-detailed account of a pretty complex one I wrote a while back here:

How to prevent ChatGPT from answering questions that are outside the scope of the provided context in the SYSTEM role message? - API - OpenAI Developer Forum

And then naturally, you can play around with prompt crafting here first before you commit it to code.

Playground - OpenAI API


One last thing. Not too long ago @EricGT posted this link.

At a high level, it goes over some more advanced techniques for building some pretty strong agents.

The thread he posted it in is also a good thread to watch.

Foundational must read GPT/LLM papers - Community - OpenAI Developer Forum


Before gpt-3.5-turbo, the system message was only good at giving the LLM a hint of direction, and the model was prone to not paying much attention to it. One of the major changes that 3.5-turbo introduced, and 4 has improved on, is the effect the system message has on the LLM's generation.
