Hello to the OpenAI community and prompting experts!
I'm looking to optimize this prompt:
```
Here's the context using which you can answer the question at the end. When the answer can be found from the context, ensure the answer contains as much information from the context for the question. If there's not enough information in the context provided, do not use any other information to answer the question. Just say that you don't know, don't try to make up an answer. When answer cannot be found from the context, only say you 'Don't know' verbatim and nothing else.
```
Note: some of this prompt is from the amazing LangChain framework; the rest is my own.
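For reference, here is roughly how the prompt is plugged in. This is a simplified sketch using the classic LangChain imports (exact module paths vary across LangChain versions, and the real setup has more pieces):

```python
from langchain.chains.question_answering import load_qa_chain
from langchain.chat_models import ChatOpenAI
from langchain.docstore.document import Document
from langchain.prompts import PromptTemplate

# The prompt above, with placeholders for the supplied context and the question.
QA_PROMPT = PromptTemplate(
    input_variables=["context", "question"],
    template=(
        "Here's the context using which you can answer the question at the end. "
        "When the answer can be found from the context, ensure the answer contains "
        "as much information from the context for the question. If there's not enough "
        "information in the context provided, do not use any other information to "
        "answer the question. Just say that you don't know, don't try to make up an "
        "answer. When answer cannot be found from the context, only say you "
        "'Don't know' verbatim and nothing else.\n\n"
        "{context}\n\n"
        "Question: {question}\n"
        "Helpful Answer:"
    ),
)

chain = load_qa_chain(
    llm=ChatOpenAI(temperature=0),
    chain_type="stuff",  # the documents below get stuffed into the {context} placeholder
    prompt=QA_PROMPT,
)

# Illustrative document; in the real application this comes from the retrieval step.
docs = [Document(page_content="Responsible NASA Official: Phil Newman")]
answer = chain.run(input_documents=docs, question="Who is the Responsible NASA Official?")
print(answer)
```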
Despite this prompt, responses like the following are generated when the answer cannot be found:
“The context does not provide information about the reason for eclipses.”
And when the answer can be found, responses like the one below are generated:
“Phil Newman is the Responsible NASA Official mentioned in the context.”
In both of these cases, what are some ways to remove references to the context, such as “mentioned in the context” or “The context does not provide information about the reason for eclipses.”?
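In other words, the ideal outputs would be simply “Phil Newman is the Responsible NASA Official.” and, for the eclipse question, exactly “Don't know”. One option I'm considering is appending an instruction along these lines to the prompt (just a sketch, not something I've verified works reliably):

```
Answer the user directly. Do not mention the context, the provided information, or how you arrived at the answer. If the answer cannot be found, reply with exactly: Don't know
```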
Hmm, I'm not passing any role as such. Do you mean the role being passed to OpenAI's APIs? Currently, LangChain is being used to experiment with this application.
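(If that is what's meant: each chat message sent to the API carries a role field, and in my current setup no system role is set explicitly. A minimal illustration with the plain OpenAI Python client (v1+), outside of the LangChain wiring:)

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        # Each message carries a "role": "system", "user", or "assistant".
        {"role": "system", "content": "You answer questions using only the supplied context."},
        {"role": "user", "content": "Who is the Responsible NASA Official?"},
    ],
)
print(response.choices[0].message.content)
```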
Sorry for the delayed response @sps - I was away from working on this project for a while. And thanks for sharing this.
It looks like the responses to this prompt, when there isn't sufficient context, can vary, i.e. the model can refuse in a variety of ways, so deducing whether the model refused would be difficult.
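Given that, one workaround would be a strict string check against the verbatim 'Don't know' sentinel the prompt asks for. A quick sketch is below; the varied refusals shown earlier would slip right past it, which is exactly the difficulty:

```python
REFUSAL = "don't know"

def is_refusal(answer: str) -> bool:
    # Normalize curly apostrophes and trailing punctuation, then compare against
    # the exact sentinel the prompt asks the model to produce.
    cleaned = answer.replace("\u2019", "'").strip().rstrip(".!").strip().lower()
    return cleaned == REFUSAL

print(is_refusal("Don't know"))  # True
print(is_refusal("The context does not provide information about the reason for eclipses."))  # False
```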
@tlunati thanks for your response. Would you be able to elaborate on how the required context is shared in the prompt? Sharing the context as-is, in plain English, would push the prompt beyond the permissible size limit.
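For reference, the alternative I'm aware of is to split the source text into chunks, embed them, and stuff only the top few relevant chunks into {context} instead of the whole document. A rough sketch with illustrative file and variable names (wondering if this is what you meant, or something different):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

# Illustrative path; the raw source text is too large to paste into a single prompt.
full_text = open("nasa_page.txt").read()

# Split the document into overlapping chunks so no single prompt has to carry all of it.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_text(full_text)

# Embed the chunks once; at question time only the k most similar chunks are
# retrieved and placed into the {context} placeholder of the prompt.
vectorstore = FAISS.from_texts(chunks, OpenAIEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

docs = retriever.get_relevant_documents("Why do eclipses happen?")
context = "\n\n".join(d.page_content for d in docs)
```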