How do I add a SystemMessage to ChatOpenAI()?

Is there a way to include a SystemMessage when using ChatOpenAI?
I am trying to build a chat about a book I uploaded to Pinecone, but first I have to tell the model to impersonate the author.

system_message = "you are the author of the book"

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.5)

retriever = vector_store.as_retriever(search_type="similarity", search_kwargs={"k": 4})

chain = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever)

answer = chain.run(q)
return answer

It is basically a simple dict object passed to the "create" function.

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]
)
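Since the system message is just another entry in that list, you can prepend it to an existing conversation programmatically. A small sketch (the helper name `with_system` is mine, for illustration):

```python
def with_system(messages, system_content):
    """Return a new message list with a system message placed first."""
    return [{"role": "system", "content": system_content}] + messages

conversation = [{"role": "user", "content": "Who won the world series in 2020?"}]
payload = with_system(conversation, "You are the author of the book.")
print(payload[0])  # {'role': 'system', 'content': 'You are the author of the book.'}
```

The API has no memory between calls, so you pass the system message (and any prior turns) on every request.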

It looks like you might be using LangChain. Here is a clip from a private project I am working on. I haven't used the LangChain one in a while, but from the code and what I recall, you just make a prompt template and feed it to the LLM object you made. Inside the prompt template, just add the system message to the history.

Edit: Sorry, I was thinking of normal completions. Just make a list of chat message history, with the system message first, and feed it to the LLM.

import time
from typing import List

from langchain.chat_models import ChatOpenAI
from langchain.schema import AIMessage, HumanMessage, SystemMessage

# LlmAccessObject, LlmSettings, LlmResponse, and Choice are project-local classes.
class LangchainChatOpenAI(LlmAccessObject):
    """An LLM Access Object that uses OpenAI's API for Chat."""

    def __init__(self, settings: LlmSettings) -> None:
        super().__init__(settings)
        self._llm = ChatOpenAI(
            model_name=self.settings.model,
            temperature=self.settings.temperature,
            max_tokens=self.settings.max_tokens,
            top_p=self.settings.top_p,
            frequency_penalty=self.settings.frequency_penalty,
            presence_penalty=self.settings.presence_penalty,
            openai_api_key=self.settings.api_key,
            n=self.settings.generated_responses,
            streaming=self.settings.stream,
        )
        self._response_id = -1
        self.system_presets = []
        self.message_history = []
    
    def add_system_presets(self, system_presets: List[str]) -> None:
        """
        Set the system messages that the chatbot will use as context.
        """
        for system_preset in system_presets:
            message = SystemMessage(content=system_preset)
            self.system_presets.append(message)

    def prompt(self, prompt: str) -> "LlmResponse":
        """
        Submit a prompt to the OpenAI API and return a response.
        """
        message = HumanMessage(content=prompt)
        self.message_history.append(message)
        try:
            response = self._llm(self.system_presets + self.message_history)
            self._response_id += 1
        except Exception as e:
            self.message_history.pop()
            raise e
        
        parsed_response = self._parse_response(response)
        message = AIMessage(content=parsed_response.get_best_response())
        self.message_history.append(message)
        return parsed_response

    def _parse_response(self, response):
        """
        Parse the response from the OpenAI API and return an LlmResponse object.
        Langchain Chat returns an AIMessage object, so we need to extract the content.
        """
        llm_response = LlmResponse()
        llm_response.id = self._response_id
        llm_response.type = "langchain-chat-openai"
        llm_response.model = self.settings.model
        llm_response.created = int(time.time())
        choice = Choice()
        choice.content = response.content
        llm_response.k_choices = [choice]
        return llm_response

Hi, thank you for the reply,
but I was looking for a way to insert a system message or pre-prompt into the retrieval chain, so that when it searches the Pinecone database for similar documents, the pre-prompt is applied before the answer is generated.

retriever = vector_store.as_retriever(search_type="similarity", search_kwargs={"k": 4})
chain = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever)