Forcing use of context information and suppressing everything else

I have a Gradio-based chatbot that's fed context information via a JSON file.
I'm seeing ChatGPT use information that isn't in the context (not even close) and generally make things up in order to be "helpful".

Is there a way to force it to ONLY use the context information?

Specify in the prompt, or system instruction

Answer the question based on the context below, and if the question can't be answered based on the context, say "I don't know" 

Context: {context}

---

Question: {question}

Answer:
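If it helps, here's a minimal sketch of filling that template in Python before sending it to the model (the function name and sample strings are just illustrative):

```python
def build_prompt(context: str, question: str) -> str:
    # Assemble the restrictive prompt: the model is told to say
    # "I don't know" whenever the context can't answer the question.
    return (
        "Answer the question based on the context below, and if the "
        "question can't be answered based on the context, "
        'say "I don\'t know"\n'
        f"\nContext: {context}\n\n---\n\nQuestion: {question}\n\nAnswer:"
    )

prompt = build_prompt(
    "Gradio is a Python library for building ML demos.",
    "What is Gradio?",
)
print(prompt)
```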

Many thanks Kevin6, that really worked; it's stopped running off in all manner of strange directions.

Are there any other commands in addition to Context, Question, etc.?

Context and Question aren't commands, just words: they show the model the desired output format, which it then follows.

For example, you can write Text instead of Context:

Answer the question based on the text below, and if the question can't be answered based on the text, say "I don't know"

Text: {your text here }

---
1 - {your question can be here}

I created a small guide about formatting prompts; you can read it here: How to Format Prompts for OpenAI API - Best Practices 🚀 Maila Recipes for LLMs

I’ve been experimenting with using (1) a System message to provide instructions; (2) several Assistant messages to provide contexts; and (3) a User message to provide the question. I also prefix the User message saying to only use the contexts provided in the Assistant messages.

It seems to be working when I am explicit enough in the System message. But I wonder if this is a Bad Idea or an appropriate use of the System/Assistant/User messages.

Here are the chat completion messages I’m sending:

System: “You are legumebot. You answer questions using only information provided in the assistant messages. If the provided information is not sufficient to provide an answer, reply: ‘The provided abstracts do not provide an answer to your question.’”

Assistant: [Several messages, each containing an abstract from a legume study (found using Pinecone embedding search) and nothing else.]

User: “Respond to the following prompt based only on the content in the assistant chat messages:\n\n{user question}”
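For concreteness, here's a sketch of how that message list might be assembled for the chat completions call (the abstracts and question below are placeholders):

```python
def build_messages(abstracts, user_question):
    # System message carries the instructions; each retrieved abstract
    # becomes its own assistant message; the user message restates the
    # restriction before the actual question.
    messages = [{
        "role": "system",
        "content": (
            "You are legumebot. You answer questions using only information "
            "provided in the assistant messages. If the provided information "
            "is not sufficient to provide an answer, reply: 'The provided "
            "abstracts do not provide an answer to your question.'"
        ),
    }]
    for abstract in abstracts:
        messages.append({"role": "assistant", "content": abstract})
    messages.append({
        "role": "user",
        "content": (
            "Respond to the following prompt based only on the content in "
            f"the assistant chat messages:\n\n{user_question}"
        ),
    })
    return messages

msgs = build_messages(["Abstract one.", "Abstract two."],
                      "Do beans fix nitrogen?")
```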

I’d be interested to hear what others think of this prompt practice!

I have been asking my bot all sorts of irrelevant questions, like "tell me a joke", "what is the president's first name", "when were the Beatles a band", etc. I kept getting random additions to my specific_message, like "I'm sorry. I cannot provide a joke as it is not a relevant question. Please rephrase the question." So I kept fine-tuning / tightening the prompt and came up with the solution below, with a clean-up step afterwards. It has worked so far. No guarantees though 🙂

hope this helps

specific_message = "Please rephrase the question"
prompt_template = f"""You are a support resource that answers questions about X and the integration of X. If the question is not about X, how to use X, or cannot be answered based on the context, return the specific message saying "{specific_message}", do not make up an answer.
{{context}}
Question: {{question}}
Answer:"""

result = qa.run(input_text)

# Clean-up: if the refusal message appears anywhere in the output,
# return it alone, stripping any extra text the model wrapped around it.
if specific_message in result:
    response = specific_message
else:
    response = result