How add SystemMessage to ChatOpenAI()?

Is there a way to include a SystemMessage in the ChatOpenAI function?
I am trying to build a chat about a book I loaded into Pinecone, but first I have to tell the model to impersonate the author.

system_message = "you are the author of the book"

llm = ChatOpenAI(model_name='gpt-3.5-turbo', temperature=0.5)

retriever = vector_store.as_retriever(search_type='similarity', search_kwargs={'k': 4})

chain = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever)

answer = chain.run(q)
return answer

In the raw OpenAI API it is basically a simple dict in the `messages` list passed to the `create` function.

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]
)

It looks like you might be using LangChain. Here is a clip from a private project I am working on. I haven’t used the LangChain one in a while, but from the code and what I recall, you just make a prompt template and feed it to the LLM object you made. Inside the prompt template, just add the system message to the history.

Edit: Sorry, I was thinking of normal completions. For chat, just make a list of chat message history that includes the system message and feed it to the LLM.

# Imports assumed from the rest of the project:
# import time
# from typing import List
# from langchain.chat_models import ChatOpenAI
# from langchain.schema import SystemMessage, HumanMessage, AIMessage

class LangchainChatOpenAI(LlmAccessObject):
    """An LLM Access Object that uses OpenAI's API for Chat."""

    def __init__(self, settings: LlmSettings) -> None:
        super().__init__(settings)
        self._llm = ChatOpenAI(
            model_name=self.settings.model,
            temperature=self.settings.temperature,
            max_tokens=self.settings.max_tokens,
            top_p=self.settings.top_p,
            frequency_penalty=self.settings.frequency_penalty,
            presence_penalty=self.settings.presence_penalty,
            openai_api_key=self.settings.api_key,
            n=self.settings.generated_responses,
            streaming=self.settings.stream,
        )
        self._response_id = -1
        self.system_presets = []
        self.message_history = []
    
    def add_system_presets(self, system_presets: List[str]) -> None:
        """
        Set the system messages that the chatbot will use as context.
        """
        for system_preset in system_presets:
            message = SystemMessage(content=system_preset)
            self.system_presets.append(message)

    def prompt(self, prompt: str) -> str:
        """
        Submit a prompt to the OpenAI API and return a response.
        """
        message = HumanMessage(content=prompt)
        self.message_history.append(message)
        try:
            response = self._llm(self.system_presets + self.message_history)
            self._response_id += 1
        except Exception as e:
            self.message_history.pop()
            raise e
        
        parsed_response = self._parse_response(response)
        best_response = parsed_response.get_best_response()
        self.message_history.append(AIMessage(content=best_response))
        return best_response

    def _parse_response(self, response):
        """
        Parse the response from the OpenAI API and return an LlmResponse object.
        Langchain Chat returns an AIMessage object, so we need to extract the content.
        """
        llm_response = LlmResponse()
        llm_response.id = self._response_id
        llm_response.type = "langchain-chat-openai"
        llm_response.model = self.settings.model
        llm_response.created = int(time.time())
        choice = Choice()
        choice.content = response.content
        llm_response.k_choices = [choice]
        return llm_response
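Stripped of the class plumbing, the core move is just prepending the system presets to the running history on every call. Here is a minimal sketch using plain dicts in the Chat Completion format shown earlier in the thread (the `build_messages` helper is a made-up name for illustration, not part of LangChain):

```python
# Keep system presets separate from the running history and
# prepend them on every call, mirroring the class above.
system_presets = [
    {"role": "system", "content": "You are the author of the book."},
]
message_history = []

def build_messages(user_prompt):
    """Record the user turn and return the full message list to send."""
    message_history.append({"role": "user", "content": user_prompt})
    return system_presets + message_history
```

The returned list is what you would hand to the chat model, so the system message leads every request without being duplicated in the stored history.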

Hi, thank you for the reply,
but I was looking for a method to insert a system message or a pre-prompt into the retriever, so that when it searches the Pinecone database for similarities, it adds the pre-prompt before creating the answer.

retriever = vector_store.as_retriever(search_type='similarity', search_kwargs={'k': 4})
chain = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever)
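One way to get a pre-prompt in front of the retrieved context with the legacy `RetrievalQA` chain is to pass a custom prompt through `chain_type_kwargs` — a hedged sketch, since the exact wiring depends on your LangChain version and should be checked against your installed release. The template string itself is plain Python and can be verified without calling the API:

```python
# Hypothetical prompt template for the "stuff" chain: the system-style
# instruction is baked in ahead of the retrieved context.
template = (
    "You are the author of the book. Answer as if you wrote it.\n\n"
    "Context:\n{context}\n\n"
    "Question: {question}\n"
    "Answer:"
)

# With LangChain this would be wired up roughly as (unverified sketch):
#   from langchain.prompts import PromptTemplate
#   prompt = PromptTemplate(template=template,
#                           input_variables=["context", "question"])
#   chain = RetrievalQA.from_chain_type(
#       llm=llm, chain_type="stuff", retriever=retriever,
#       chain_type_kwargs={"prompt": prompt})

# Plain-Python check that the template fills in correctly:
filled = template.format(context="(retrieved passages)",
                         question="Who is the narrator?")
```

Note this does not change the similarity search itself — the retriever still matches on the raw question; the pre-prompt only shapes the answer generated from the retrieved documents.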

Hey, did you find the answer? I’m trying to do the same as you, but with user and assistant messages. I’ve searched the LangChain docs but they are not very helpful.

Hi!

I’m also interested to see if there is an answer to this!

I don’t understand your question. If this is the process flow:

user question ----> vector store ----> LLM ----> response

Where, exactly, are you wanting to place a “system message”?

If you are saying that you want to send the system message as part of the cosine similarity search to the vector store, I can’t see why you would ever want to do that.

If you are saying you want to add it to the prompt that you send to the LLM once you receive the context documents from the vector store, this is how you would do it in the Chat Completion API: How add SystemMessage to ChatOpenAI()? - #2 by codie

If you’re having a problem, you could also try simply constructing your text prompt something like this:

“System Message: Respond to the user question as if you were the author.”
“User Question: What does your book say about the meaning of life?”
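That workaround is a one-liner — a sketch where `with_preprompt` is a made-up helper name, not any library function:

```python
def with_preprompt(question,
                   preprompt="Respond to the user question as if you were the author."):
    """Fold the 'system message' into the plain-text prompt itself."""
    return f"System Message: {preprompt}\nUser Question: {question}"

prompt = with_preprompt("What does your book say about the meaning of life?")
```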

My two cents.