RetrievalQAWithSourcesChain Hallucination

I am trying to develop an interactive chatbot based on a knowledge base. So far, I have built a Faiss vector store from the text files I scraped from a website. Then, using LangChain's ChatOpenAI and RetrievalQAWithSourcesChain, I built a simple chatbot with memory using LangChain's prompt tools (SystemMessagePromptTemplate, HumanMessagePromptTemplate and ChatPromptTemplate).

def process_query(query, messages, vector_store, llm):
    messages.append(HumanMessagePromptTemplate.from_template(query))
    prompt = ChatPromptTemplate.from_messages(messages)

    chain_type_kwargs = {"prompt": prompt}
    chain = RetrievalQAWithSourcesChain.from_chain_type(
        llm=llm,
        chain_type="stuff",
        retriever=vector_store.as_retriever(search_type="similarity",
                                            search_kwargs={"k": 1}),
        return_source_documents=True,
        chain_type_kwargs=chain_type_kwargs
    )

    result = chain(query)
    print_result(result, query)

    # Add the answer to the conversation context
    messages.append(SystemMessagePromptTemplate.from_template(result['answer']))

    return messages, result['answer']

 
system_template=""" You are an AI Q&A chatbot responsible for assisting the users of a website that offers fitness scheduling software.
You have been trained on a very specific knowledge base that presents and explains all of its services.
When answering, if there is no similar Q&A in the knowledge base, DO NOT TRY to make up an answer; answer EXACTLY with the following: "I'm really sorry but I don't have the answer to that question. Would you like me to raise a ticket for you and one of our humans will get back to you as soon as possible?".
Please do not mention the sources in your answers.
Once again, if you lack knowledge of the answer or have doubts about its accuracy, answer EXACTLY with the following: "I'm really sorry but I don't have the answer to that question. Would you like me to raise a ticket for you and one of our humans will get back to you as soon as possible?".
----------------
{summaries}"""  
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0, max_tokens=128, 
                 openai_api_key=openai_api_key)  

messages = [
    SystemMessagePromptTemplate.from_template(system_template),
]


while True:
    user_input = input("Your question: ")

    if user_input.lower() == 'exit':
        print("Thank you for using the chatbot. Have a great day!")
        break

    messages, answer = process_query(user_input, messages, vector_store, llm)

The problem here is that I am facing model hallucination: I can ask some questions and the model will still answer, even though I stated in the SystemMessagePromptTemplate that it shouldn't answer when it is not sure. It can even answer when the subject does not exist at all in my knowledge base.

Can you please help ?

Instead of assigning an identity in the prompt like "an AI assistant responsible for…", simply describe the task. This helps avoid creating a fictional scenario and keeps the model from generating a so-called hallucination.

Try this

Answer the question based on the context below, and if the question can't be answered based on the context, say "I don't know" 

Context: {context}

---

Question: {question}

Answer:
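A minimal plain-Python sketch of how that template gets filled (the function and sample strings here are illustrative, not part of your code). One thing to watch out for: if you stay with RetrievalQAWithSourcesChain, its "stuff" prompt expects the retrieved passages under a `{summaries}` variable, not `{context}`, so you would need to rename the placeholder accordingly.

```python
# The task-description prompt suggested above, filled in by hand so you
# can see exactly what the model receives. With RetrievalQAWithSourcesChain,
# rename "context" to "summaries" to match that chain's expected input variable.
TASK_TEMPLATE = (
    "Answer the question based on the context below, and if the question "
    "can't be answered based on the context, say \"I don't know\"\n"
    "\n"
    "Context: {context}\n"
    "\n"
    "---\n"
    "\n"
    "Question: {question}\n"
    "\n"
    "Answer:"
)

def build_prompt(context: str, question: str) -> str:
    """Fill the template with the retrieved passages and the user question."""
    return TASK_TEMPLATE.format(context=context, question=question)

# Illustrative usage with made-up knowledge-base text:
prompt = build_prompt(
    context="Classes can be booked up to 7 days in advance.",
    question="How far ahead can I book a class?",
)
print(prompt)
```

With temperature at 0 and the instruction phrased as a task over an explicit context block, the model has much less room to invent an answer than it does under a role-play persona.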