Hello, I feel this is a silly question, so please forgive me if it is -
I am building a chatbot and using a Redis cache to store previous messages.
I am currently “passing it back” to the API by:
- manually concatenating previous messages into a formatted string like:
previous_convo = """
CHAT HISTORY
--------
USER
message1
ASSISTANT
message2
USER
message3
ASSISTANT
message4
"""
and then doing a:
prompt = f"""
Respond to the user's current message:
{previous_convo}
CURRENT QUESTION:
{current_question}
"""
And then using the completions API like this:
from openai import OpenAI
client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt}
    ]
)
Note: there is a single “user” message containing the entire chat history in the string itself
I do this for every message in the conversation, so every call to the LLM contains a single user dict holding the entire history.
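For what it's worth, the string-building step in option 1 can be sketched as a small helper. The `build_history` name and the (role, text) tuple shape are my own assumptions about how the messages might come out of the Redis cache:

```python
def build_history(messages):
    """Format (role, text) pairs into the CHAT HISTORY block used above."""
    lines = ["CHAT HISTORY", "--------"]
    for role, text in messages:
        lines.append(role.upper())  # e.g. "user" -> "USER"
        lines.append(text)
    return "\n".join(lines)

# Hypothetical history pulled from Redis:
history = [("user", "message1"), ("assistant", "message2")]
previous_convo = build_history(history)
```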
Option 2 I thought of would be to use a list of dicts for the previous messages:
previous_messages = [
    {"role": "user", "content": message1},
    {"role": "assistant", "content": message2},
    {"role": "user", "content": message3},
    {"role": "assistant", "content": message4},
]
and then using the API like this:
from openai import OpenAI
client = OpenAI()
# Note: no trailing comma here, otherwise system_prompt becomes a tuple
system_prompt = {"role": "system", "content": "You are a helpful assistant."}
messages = [system_prompt]
messages.extend(previous_messages)
messages.append({"role": "user", "content": "message5 content"})
completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
)
So in this case the full request would look like:
completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "message 1 content."},
        {"role": "assistant", "content": "message 2 content"},
        {"role": "user", "content": "message 3 content"},
        {"role": "assistant", "content": "message 4 content."},
        {"role": "user", "content": "message 5 content."}
    ],
)
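Since the messages already live in Redis, option 2 also pairs naturally with storing the list as JSON and extending it each turn. A minimal sketch of that round-trip, with the Redis key name and the actual `r.set`/`r.get` calls left as comments since they are my assumptions, not part of the original setup:

```python
import json

# What might be stored per conversation in Redis, e.g.:
#   r.set(f"chat:{conversation_id}", serialized)
previous_messages = [
    {"role": "user", "content": "message1"},
    {"role": "assistant", "content": "message2"},
]
serialized = json.dumps(previous_messages)

# On the next turn, load the history and append the new user message
# before passing the whole list to the API:
restored = json.loads(serialized)
restored.append({"role": "user", "content": "message3"})
```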
Context length issues aside (assume short conversations):
Question: how should I be “communicating” the previous messages back to the LLM?