Changing prompts to remove references to context

Hello to the OpenAI community and Prompting experts!

Was looking to optimize this prompt.

Here's the context using which you can answer the question at the end. When the answer can be found from the context, ensure the answer contains as much information from the context for the question. If there's not enough information in the context provided, do not use any other information to answer the question. Just say that you don't know, don't try to make up an answer. When answer cannot be found from the context, only say you 'Don't know' verbatim and nothing else.

Note: Some part of this prompt is from the amazing Langchain framework. And the rest of it is my own making.

Despite this prompt, responses like these are generated when answer cannot be found:

  • “The context does not provide information about the reason for eclipses.”

And when answer can be found responses like below are generated:

  • “Phil Newman is the Responsible NASA Official mentioned in the context.”

In both of these cases, what are the ways to remove references to the context, like "mentioned in the context" or "The context does not provide information about the reason for eclipses"?

Looking forward to some guidance here!

Hi @systems

Your prompt needs to be refined and condensed.

Also what role are you passing the prompt as?

Appreciate your response @sps

Hmm, not passing any role as such. Do you mean what role is being passed to OpenAI’s APIs? Currently, Langchain is being used to experiment with this application.

I would recommend that you first read the ChatCompletion docs and experiment in the Playground before using LangChain.

If you are not using gpt-3.5-turbo, you should be, because it will keep your costs to a minimum.

Yes, gpt-3.5 is significantly cheaper - it is already being used.

I remember skimming the ChatCompletion docs. Checked it once again now. Looks like a thorough reading might be required.

Did you manage to remove context-related info in your experiments?

I haven’t experimented with your specific use case, but yes, it’s pretty simple.

Here’s a basic example
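A minimal sketch of what such an example might look like: the grounding instructions go in the *system* role, and the context plus question go in the *user* message. The model name, the exact instruction wording, and the helper name `build_messages` are assumptions, not the original poster's example.

```python
# Sketch: put the "answer only from this information" rules in the system
# message, and keep the context + question together in the user message.

def build_messages(context: str, question: str) -> list:
    system = (
        "You answer questions strictly from the information the user provides. "
        "If the answer is not present in that information, reply exactly: Don't know. "
        "Never mention the words 'context' or 'provided information' in your answer."
    )
    user = f"Information:\n{context}\n\nQuestion: {question}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# The messages would then be sent with e.g. the (legacy) openai Python SDK:
# openai.ChatCompletion.create(model="gpt-3.5-turbo",
#                              messages=build_messages(ctx, question))
```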

Actually, the best way we found to do it:

System prompt: You are a qualified analyst. You always answer questions carefully, checking the available information given to you.

User prompt:
Here is the information available.

Is there enough data to answer this question with this information:

Question: xxxxfx

Answer only with Yes, No or Difficult to decide.
No other comments.

If the answer is yes, then you could send back the first system prompt, the user prompt, and then, as a system prompt, the answer "Yes".

Then add a new user prompt: "Answer the question with the information given."

Works perfectly, at least for our use cases, with gpt-3.5 or gpt-4.

We are not using LangChain, just the raw OpenAI Python API.
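A sketch of that two-call flow (the function and variable names are my own, not the poster's code). Call 1 asks only "Yes / No / Difficult to decide"; call 2 is made only on "Yes", replaying the earlier turns so the model stays consistent. Note the post sends the "Yes" back as a system prompt; an `assistant` turn is the more conventional role and is what's used here.

```python
# Two-step gating: first check answerability, then answer only if gated "Yes".

SYSTEM = (
    "You are a qualified analyst. You always answer questions carefully, "
    "checking the available information given to you."
)

def gating_messages(information: str, question: str) -> list:
    """Messages for call 1: ask only whether the question is answerable."""
    user = (
        f"Here is the information available:\n{information}\n\n"
        "Is there enough data to answer this question with this information?\n"
        f"Question: {question}\n"
        "Answer only with Yes, No or Difficult to decide. No other comments."
    )
    return [{"role": "system", "content": SYSTEM},
            {"role": "user", "content": user}]

def answer_messages(information: str, question: str) -> list:
    """Messages for call 2: replay the first exchange plus the 'Yes', then ask."""
    return gating_messages(information, question) + [
        {"role": "assistant", "content": "Yes"},
        {"role": "user", "content": "Answer the question with the information given."},
    ]

# Flow: send gating_messages(...) first; only if the reply starts with "Yes"
# do you send answer_messages(...) as the second ChatCompletion call.
```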

Sorry for the delayed response @sps - I was away from this project for a while. And thanks for sharing this.

Looks like the responses to this prompt, when there is insufficient context, can vary, i.e. the model can refuse in a variety of ways, so detecting whether the model refused would be difficult.

@tlunati thanks for your response. Would you be able to elaborate on how the required context is shared in the prompt? Sharing the context as-is, in plain English, would push the prompt size beyond the permissible limit.

Hi systems

If your context is very long, a trick that works very well is to cut it into pieces of 3000 tokens, then ask the API to extract the main information from each piece as 10 bullet points.

By aggregating those bullet points, you end up with a much shorter context.
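A sketch of that chunk-and-summarise trick. The exact summarisation prompt is an assumption, and chunks are cut by character count here as a crude stand-in for proper 3000-token pieces (a tokenizer such as tiktoken would be used in practice).

```python
# Chunk a long context, then build a summarisation request per chunk.
# Aggregating the bullet-point summaries yields a much shorter context.

def chunk(text: str, size: int) -> list:
    """Split text into consecutive pieces of at most `size` characters."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def summarise_prompt(piece: str) -> list:
    """Messages asking the model to compress one chunk into 10 bullets."""
    return [{"role": "user",
             "content": "Extract the main information from the following text "
                        "as 10 bullet points:\n" + piece}]

# Each chunk would then be summarised with a ChatCompletion call, e.g.:
#   bullets = [call_api(summarise_prompt(p)) for p in chunk(long_context, 12000)]
#   short_context = "\n".join(bullets)
```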

Then, when creating the prompt, never mention the context in the system prompt.

Ask it to answer the question using the information provided, like this:

System prompt:

You are …
Your mission is to answer questions using the data I provide you.
If the answer is not in the data, don’t provide any answer.
If the answer is in the data, give the answer without any comment.

The format of your response should be XML:

<answer_in_data></answer_in_data>: yes if you can answer with the data provided, no if you can’t

<the_answer></the_answer>: the answer, without comment, if you can answer; empty if you can’t

User prompt:

Here is the question you should answer:
<the_question>
…
</the_question>

Using the following:
<provided_data>
…
</provided_data>

This prompt should work perfectly with gpt-4 and about 90% of the time with gpt-3.5.

Keep the xml tags. Note that the xml tag names use the same wording as when introducing the mission and persona; this is very important for good results.

You can also improve the results a little (by about 10% according to our benchmark) by asking it to give a <note_on_10> for its answer and a short comment regarding the quality of its answer, but this will increase your costs.

You will get an XML answer that is easy to parse, so you know exactly when you have a correct answer.

Works perfectly for us (we have other tricks, but these should help).
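A sketch of parsing that tagged reply. A regex is used instead of a full XML parser because the model returns two loose tag pairs rather than a well-formed document; the function name `parse_reply` is my own.

```python
import re

def parse_reply(reply: str):
    """Return (answerable, answer_text) from the tagged model output."""
    def grab(tag: str) -> str:
        # Find the text between <tag> and </tag>; empty string if absent.
        m = re.search(rf"<{tag}>(.*?)</{tag}>", reply, re.DOTALL)
        return m.group(1).strip() if m else ""
    return grab("answer_in_data").lower() == "yes", grab("the_answer")

reply = ("<answer_in_data>yes</answer_in_data>\n"
         "<the_answer>Phil Newman</the_answer>")
ok, answer = parse_reply(reply)   # → (True, "Phil Newman")
```

Branching on `ok` tells you exactly when the model had a grounded answer, with no "the context does not provide…" phrasing to detect.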

Have a good day

What about chat history? How would you fit it into the suggested structure?

Adding this worked for me: "Please do not mention the term 'context' in your answer. Do not provide extra explanation to your answer."