Rather than use multiple user/assistant message pairs, I am attempting to create a “context summary” in my script, mainly by stripping out what I consider fluff words and needless explanation from the responses. Could I then add that context summary to the next request as part of the system message? Or is the system message really just for generic things like “You are a helpful assistant”, as mentioned in the docs?
If the answer is no, is there any real point to the system part of the message?
Additionally, will saving tokens by sending a context summary instead of the full user/assistant messages actually hinder the LLM’s ability to give its best response?
Yeah, it’s completely doable. The system message is not a one-shot, set-and-forget thing. You can change it at will, and technically it is submitted every time as part of your prompt.
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# The system message is just another entry in the messages list,
# resubmitted fresh with every request.
completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)
print(completion.choices[0].message)
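For instance, nothing stops you from sending a completely different system message on the very next call; the API is stateless, so whatever you put in the list is what the model sees. A follow-up call in the same script (same pre-1.0 SDK style as above; the content strings are just placeholders):

# Swap the system content between calls. Nothing about the previous
# system message is remembered server-side.
completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a terse assistant. Answer in one sentence."},
        {"role": "user", "content": "Hello again!"}
    ]
)
print(completion.choices[0].message)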
So, at certain intervals, you can summarize the conversation so far and fold it into the system content. Maybe make a system template with a section for the summary, like this (a rough sketch of the whole loop follows the template):
You are a helpful assistant.
[History Summary]
The user asked questions about dynamic system messages and history summarization.
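To make that concrete, here is a minimal sketch of the interval-summarization loop, in the same pre-1.0 SDK style as the example above. SYSTEM_TEMPLATE, summarize(), and the 4-pair interval are all made-up names and choices for illustration, not part of the API:

import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# Hypothetical template: the [History Summary] section is refreshed
# at intervals, mirroring the plain-text template above.
SYSTEM_TEMPLATE = "You are a helpful assistant.\n[History Summary]\n{summary}"

def summarize(history):
    # Illustrative approach: ask the model itself to compress the older
    # turns. Any summarizer (or your own fluff-stripping) would do.
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Summarize this conversation in 2-3 terse sentences."},
            {"role": "user", "content": transcript},
        ],
    )
    return resp.choices[0].message["content"]

summary = "None yet."
history = []  # only the recent, not-yet-summarized turns

def chat(user_text):
    global summary, history
    history.append({"role": "user", "content": user_text})
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": SYSTEM_TEMPLATE.format(summary=summary)}]
                 + history,
    )
    reply = resp.choices[0].message["content"]
    history.append({"role": "assistant", "content": reply})
    if len(history) >= 8:  # every 4 user/assistant pairs (arbitrary interval)
        summary = summarize(history)
        history = []  # drop the raw turns to save tokens
    return reply

As for your last question: whether the compression hurts answer quality depends on what the summary drops. Keeping the most recent turns verbatim, as this sketch does between intervals, is one way to hedge.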
Yeah, in the beginning system messages used to be pretty weak. I don’t know if that was because I sucked at writing them or if they have changed things quite a bit. But either way, I have noticed that a properly worded system message will really drive the conversation in the way you want it to.
I made a semi-detailed account of a pretty complex one I built a while back here:
Before gpt-3.5-turbo, the system message was only good for giving the LLM a hint of direction, and the model was prone to not paying much attention to it. One of the major changes that gpt-3.5-turbo introduced, and that gpt-4 has improved on, is how much weight the system message carries in the LLM’s generation.