Changing prompts to remove references to context

Hello to the OpenAI community and Prompting experts!

Was looking to optimize this prompt.

Here's the context using which you can answer the question at the end. When the answer can be found from the context, ensure the answer contains as much information from the context for the question. If there's not enough information in the context provided, do not use any other information to answer the question. Just say that you don't know, don't try to make up an answer. When answer cannot be found from the context, only say you 'Don't know' verbatim and nothing else.

Note: part of this prompt comes from the amazing LangChain framework; the rest is of my own making.

Despite this prompt, responses like these are generated when answer cannot be found:

  • “The context does not provide information about the reason for eclipses.”

And when answer can be found responses like below are generated:

  • “Phil Newman is the Responsible NASA Official mentioned in the context.”

In both of these cases, what are some ways to remove references to the context, such as “mentioned in the context” and “The context does not provide information about the reason for eclipses”?

Looking forward to some guidance here!

Hi @systems

Your prompt needs to be refined and condensed.

Also what role are you passing the prompt as?

Appreciate your response @sps

Hmm, not passing any role as such. Do you mean the role being passed to OpenAI’s APIs? Currently, LangChain is being used to experiment with this application.

I would recommend that you first read the ChatCompletion docs and experiment in the Playground before using LangChain.

If you are not using gpt-3.5-turbo, you should be, because it will keep your costs to a minimum.

Yes, gpt-3.5 is significantly cheaper - it is already being used.

I remember skimming the ChatCompletion docs. Checked it once again now. Looks like a thorough reading might be required.

Did you manage to remove context-related info in your experiments?

I haven’t experimented with your specific use case, but yes, it’s pretty simple.

Here’s a basic example
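The example itself wasn’t preserved in this thread, so here is a hedged sketch of the role-based approach being discussed (the wording and function name are mine, not the original poster’s): put the grounding rules into a `system` message and keep the context and question in the `user` message, so the model treats the rules as instructions rather than as text to quote back.

```python
# Illustrative sketch only -- not the original poster's example.
# The grounding rules go in the system message; the context and
# question go in the user message.

def build_messages(context: str, question: str) -> list:
    system = (
        "Answer using only the information provided by the user. "
        "Never mention the words 'context' or 'provided information' "
        "in your answer. "
        "If the answer is not present, reply exactly: I don't know."
    )
    user = f"Information:\n{context}\n\nQuestion: {question}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_messages(
    "Phil Newman is the Responsible NASA Official.",
    "Who is the Responsible NASA Official?",
)
# This list would then be passed as `messages` to the
# ChatCompletion endpoint with model="gpt-3.5-turbo".
```

Because the refusal instruction lives in the system role rather than inside the text being answered over, the model is less likely to describe “the context” as if it were part of the conversation.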

Actually, the best way we found to do it:

System prompt: You are a qualified analyst. You always answer questions carefully, checking the available information given to you.

User prompt:
Here is the information available.

Is there enough data to answer this question with this information?

Question: xxxxfx

Answer only with Yes, No or Difficult to decide.
No other comments.

If the answer is Yes, you can then send back the first system prompt, the user prompt, then (as a system prompt) the answer Yes, and a new user prompt: Answer the question with the information given.

Works perfectly, at least for our use cases, with gpt-3.5 or gpt-4.

We are not using LangChain, just the raw OpenAI Python API.
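A minimal sketch of the two-call “gate then answer” flow described above (function names and exact message wording are my own paraphrase, not from the post; note the post sends the Yes back as a system prompt, while the `assistant` role is the conventional choice for a previous model reply):

```python
# Hypothetical sketch of the two-step flow: first ask whether the
# information is sufficient, then replay that exchange and ask for
# the actual answer. Names here are illustrative.

def gate_messages(information: str, question: str) -> list:
    """First call: ask only whether the information is sufficient."""
    return [
        {"role": "system",
         "content": "You are a qualified analyst. You always answer "
                    "questions carefully, checking the available "
                    "information given to you."},
        {"role": "user",
         "content": f"Here is the information available:\n{information}\n\n"
                    f"Is there enough data to answer this question with "
                    f"this information?\nQuestion: {question}\n"
                    "Answer only with Yes, No or Difficult to decide. "
                    "No other comments."},
    ]

def answer_messages(information: str, question: str,
                    gate_reply: str) -> list:
    """Second call: replay the first exchange, append the model's
    reply, then ask for the actual answer."""
    msgs = gate_messages(information, question)
    # The post passes this back as a system prompt; "assistant" is
    # the conventional role for a prior model reply.
    msgs.append({"role": "assistant", "content": gate_reply})
    msgs.append({"role": "user",
                 "content": "Answer the question with the "
                            "information given."})
    return msgs

# Each list would then be passed as `messages` to the
# ChatCompletion endpoint; the second call is made only if the
# first reply is Yes.
```

Gating this way also gives you a clean, machine-checkable refusal signal (Yes / No / Difficult to decide) instead of having to parse free-form refusals.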

Sorry for the delayed response @sps - I was away from this project for a while. And thanks for sharing this.

Looks like the responses to this prompt, when there is insufficient context, can vary - the model can refuse in a variety of ways - so detecting whether the model refused would be difficult.

@tlunati thanks for your response. Would you be able to elaborate on how the required context is shared in the prompt? Sharing the context as-is, in plain English, would push the prompt beyond the permissible token limit.