Can't get relevant answers from gpt-3.5-turbo-0125

system_template = """You are a helpful chatbot. Answer the question from the context. If you can't find the answer, say "I don't know." Read the whole context and generate relevant information.

Begin!

{chat_history}

Question: {question}
Helpful Answer:"""
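As a sanity check (plain Python, `string.Formatter` from the standard library, no LangChain), these are the only placeholders that template actually exposes:

```python
from string import Formatter

system_template = """You are a helpful chatbot. Answer the question from the context. If you can't find the answer, say "I don't know." Read the whole context and generate relevant information.

Begin!

{chat_history}

Question: {question}
Helpful Answer:"""

# Collect every {placeholder} name that appears in the template.
fields = {name for _, name, _, _ in Formatter().parse(system_template) if name}
print(sorted(fields))  # ['chat_history', 'question']
```

So the template only takes `chat_history` and `question`; there is no `{context}` slot in it.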

retriever_mpnet = db_mpnet.as_retriever(search_kwargs={'k': 3})

from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

messages = [
    SystemMessagePromptTemplate.from_template(system_template),
    HumanMessagePromptTemplate.from_template("{question}"),
]

prompt = ChatPromptTemplate.from_messages(messages)
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
    temperature=0.3,
    model="gpt-3.5-turbo-0125",
    openai_api_key=openai.api_key,
)

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(
    memory_key="chat_history",
    max_len=20,
    output_key="answer",
    retriever=retriever_mpnet,
)

from langchain.chains import ConversationalRetrievalChain

chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=db_mpnet_ret,
    memory=memory,
    condense_question_prompt=prompt,
    return_source_documents=True,
    get_chat_history=lambda h: h,
)

It never gives a proper answer, even when the retrieved documents contain exactly the information the query asks for. It either generates a random answer on its own or says "I don't know", even for simple queries like "payment for nurse". It keeps giving different answers, and I have tried every temperature. At one point it was giving somewhat correct answers, but after a restart it went back to giving wrong answers. I don't know what to do to make it read the context properly.
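For reference, this is roughly the kind of stuffed prompt I expected the chain to assemble before calling the model (plain-Python mock with no LangChain or API calls; the document contents here are made up):

```python
# Mock of stuffing retrieved documents into a QA prompt.
qa_template = """Answer the question from the context below. If the answer is not in the context, say "I don't know."

Context:
{context}

Question: {question}
Helpful Answer:"""

# Stand-ins for documents the retriever would return (invented contents).
retrieved_docs = [
    "Nurses are paid biweekly via direct deposit.",
    "Payment queries should be sent to the payroll office.",
]

# Join the documents and fill both slots of the prompt.
context = "\n\n".join(retrieved_docs)
prompt_text = qa_template.format(context=context, question="payment for nurse")
print(prompt_text)
```

With a prompt like this, the retrieved text is actually in front of the model when it answers; that is the behavior I am trying to get from the chain above.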