Chat Completions API regarding historical chat data

I have created a RAG-based chatbot, and I am using the Chat Completions API, passing the historical chat via an assistant message.

When I ask a question that is completely unrelated to the historical chat, it still gives preference to the historical chat when answering, along with the top 3 context chunks it retrieved from the vector DB.

I tried changing the prompt to say: "If the question is unrelated to the historical chat, please don't take it into consideration and answer based on the user context."

Is the above approach correct or do I need to do something more?

Hello! Welcome to the forum.

Are you using the Chat Completions API or the Assistants API?

I personally find this prompt slightly ambiguous. Do you want to try: "If the question is not related to the earlier history, ignore the earlier history and answer the current question as is"?
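One place to put that instruction is in the system message rather than a user turn, so it applies to every request. A minimal sketch (the exact wording is only a suggestion):

```python
# System prompt carrying the "ignore unrelated history" instruction.
messages = [
    {
        "role": "system",
        "content": (
            "You are a helpful assistant. "
            "If the question is not related to the earlier history, "
            "ignore the earlier history and answer the current question as is."
        ),
    },
]
```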

It's the Chat Completions API.

I think I see the issue: you're sending the chat history as the last thing the model sees by putting this at the end of the messages:
{"role": "assistant", "content": history_str}

Try creating a list to store messages in and sending that instead, with your RAG-retrieved context simply prepended to the user prompt.

Here’s a bit of code to show what the message list might look like:

from openai import OpenAI

# Assumes the OpenAI API key is set in the OPENAI_API_KEY environment variable
client = OpenAI()

# Initialize a list to store messages, starting with the system prompt.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
]

# Function to add a message to the conversation history
def add_message(role, content):
    messages.append({"role": role, "content": content})

# Function to handle conversation and API interaction
def handle_conversation():
    while True:
        prompt = input("You: ")

        add_message("user", prompt)

        # Create a response using the OpenAI API, sending the full history
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=messages,
        )

        # Extract and process the response
        if response.choices and len(response.choices) > 0:
            full_response = response.choices[0].message.content
            print("Assistant:", full_response)  # Display the full response
            add_message("assistant", full_response)
        else:
            print("No response received from the API.")

# Example of running the conversation handler
handle_conversation()
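For the RAG part, a minimal sketch of prepending the retrieved context to the user turn (assuming a hypothetical `retrieve_top_k` helper that returns a list of context strings from your vector DB):

```python
def build_user_message(question, context_chunks):
    # Join the retrieved chunks into one context block and place it
    # ahead of the question, all inside a single user message.
    context_block = "\n\n".join(context_chunks)
    content = "Context:\n" + context_block + "\n\nQuestion: " + question
    return {"role": "user", "content": content}

# Usage inside the loop, instead of add_message("user", prompt):
#   messages.append(build_user_message(prompt, retrieve_top_k(prompt, k=3)))
```

This keeps the history and the retrieved context in their natural places: prior turns stay as alternating user/assistant messages, and the fresh context travels with the current question.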