How to prevent ChatGPT from answering questions that are outside the scope of the provided context in the SYSTEM role message?


I am the developer of BeeHelp, and it took me 3 weeks to (more or less) control the responses of the chat to certain questions. As you probably already know, when you want ChatGPT to respond to your visitors based on certain contextual information passed to it, there are two common problems:

  • ChatGPT responds to questions and requests that have NOTHING TO DO with the provided context.
  • ChatGPT provides information from its “general knowledge” that, in the best case, attributes good things to your service or product that it does not actually have, and in the worst case recommends that the user visit or use products from the competition.

If you intend to build an assistant that helps your potential customers (or users) with doubts about the content you provide, it is essential to be able to control the above.

Solution: post-prompt

After 3 weeks of trying to control those two points through a good “prompt” passed to the API with the “system” role, without much success, I finally found something that works, or at least is working “well enough” for me now:

  • In the API call, add two more sentences after the visitor’s request or question: “Don’t justify your answers. Don’t give information not mentioned in the CONTEXT INFORMATION.”
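A minimal sketch of how this post-prompt can be concatenated to the visitor's question before calling the chat completions endpoint (the helper function and context wording are illustrative, not from the original post):

```python
POST_PROMPT = (
    " Don't justify your answers."
    " Don't give information not mentioned in the CONTEXT INFORMATION."
)

def build_messages(context: str, user_question: str) -> list[dict]:
    """Build the message list: context goes in the system role,
    the visitor's question plus the post-prompt goes in the user role."""
    return [
        {"role": "system", "content": f"CONTEXT INFORMATION:\n{context}"},
        {"role": "user", "content": user_question + POST_PROMPT},
    ]

# The resulting list is what you would pass as `messages`
# to the chat completions endpoint.
```

The key point is that the two sentences ride along in the *user* message, after the question, rather than sitting only in the system prompt.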

Woww… It worked wonderfully on the responses that I had not been able to control until now!

If you have other alternatives or improvements to this “post-prompt”, please share them in the comments!

Example before this hack

[USER] Prepare 10 multiple-choice questions and answers by the course of Maintenance engineering for apparel machinery

[SYSTEM] I’m sorry, but as an AI language model, I don’t have the ability to prepare questions and answers for a specific course. However, I can suggest some topics that could be covered in a Maintenance Engineering course for apparel machinery, such as:

  1. Introduction to maintenance engineering
  2. Types of maintenance
  3. Maintenance planning and scheduling
  4. Maintenance budgeting and cost control
  5. Maintenance management systems
  6. Maintenance safety and environmental issues
  7. Maintenance of mechanical components
  8. Maintenance of electrical components
  9. Maintenance of hydraulic and pneumatic systems
  10. Maintenance of automation and control systems

Based on these topics, you can create multiple-choice questions and answers that test the knowledge of the students.

Example using the POST-PROMPT

[USER] Prepare 10 multiple-choice questions and answers by the course of Maintenance engineering for apparel machinery

[SYSTEM] Sorry, I’m afraid I cannot fulfill that request as the provided CONTEXT INFORMATION does not relate to the topic of Maintenance engineering for apparel machinery.

Edited: in several comments below you can read how I had to improve the post-prompt to make it work in other cases.


LOL… it works better than without this “post-prompt”, but it continues hallucinating horribly! :pensive:

See this answer:

[USER] do you have a way to generate questions and answers from a document?

[ASSISTANT] Yes, BeeHelp has a feature that allows you to import questions and answers from a document. You can upload a file in .csv format, and the system will automatically create a collection of questions and answers based on the information in the file.

What!!?? I have NEVER told it that my software is able to work (yet) with CSV files!! So why is it INVENTING such things!?

Note: I’m now using this “post-prompt”:

[USER] … {user_question} Don’t justify your answers. Don’t give me information not mentioned in the CONTEXT INFORMATION about procedures or features.

I mean: I add those 2 EXPLICIT instructions not to hallucinate, but it does it again!?
Knock knock… OpenAI folks, does anyone know how to better control this language model’s responses?

Tip: please give us a language model endpoint that is not SO fine-tuned to behave as a helpful and kind generic chatbot. A simpler model, like legacy GPT-3, would be enough. Am I wrong about this?


I’ve set this ‘post-prompt’ (below), and it worked well on this example. Maybe my non-native English didn’t help with writing a good prompt. This time, I asked ChatGPT to help me write a direct order in English, and it suggested this:

$post_prompt = " Do not give me any information about procedures and service features that are not mentioned in the PROVIDED CONTEXT."

And this worked as expected. Fine.

[USER] do you have a way to generate questions and answers from a document? + $post_prompt

[ASSISTANT] Sorry, I don’t know the answer. :sweat_smile:

I hope this helps you.

Hey @caos30. According to my experience, there are several ways to prevent hallucinations and keep your chatbot on-topic. It is a very complex topic though and there is no magic formula or prompt that can ensure that you won’t get any of them at some point.

Prompt-engineering is definitely one of them, but I believe it should be used in combination with some others to enhance the robustness of the chatbot. I shared some of them in this link, in case it helps: link.

In terms of prompt engineering of the system message, the formula that usually works pretty well for me consists of giving the chatbot a Constitution or set of principles (à la Anthropic) that determine its behavior, and specifying (as one of the principles) that it should never break them under any circumstance. It’s OK to be thorough and precise and to extend your explanations of the principles with as many details as possible (as we could see in Bing Chat’s leaked prompt as well). Something like this:


1) Principle 1.
2) Principle 2.
3) Principle 3.

Also: fortunately, I’ve detected that gpt-4 is way better than turbo as regards following system instructions precisely. I’m pretty sure that OpenAI will keep working on that and next versions of turbo will be way more customizable in the near future.


To add to @AgusPG’s thoughtful post, I’d say that when it comes to prompting, negatives usually don’t work as they contain the words you want to limit and therefore it’s more likely they’ll be output.

Also, ChatML seems to really be helping… I like the direction it’s going in.


Wow… thanks! It sounds a bit exaggerated. But you can be sure that if I feel desperate again in the next few days, I will try it this way. I tagged your post as a favorite, to find it fast if I need it :grin:

I say this because I already have a direct SYSTEM instruction along those lines, passed as a previous message with the SYSTEM role to the API, and it really seems to be pretty much ignored :pensive: because the results are very disappointing.

1 Like

I use the phrase: “If the answer is not in the context, say the word ‘Unknown’.”

And then I do post-processing and swap the word Unknown for the text I want the user to see.

E.g.: Sorry, but that is outside my scope of knowledge

I also put this in the USER part, not the SYSTEM part (not sure if that makes a difference).
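A minimal sketch of that sentinel-and-swap post-processing step; the fallback wording is just an example:

```python
SENTINEL = "Unknown"
FALLBACK = "Sorry, but that is outside my scope of knowledge."

def postprocess(answer: str) -> str:
    """If the model emitted the sentinel word, replace the whole
    answer with the user-facing fallback text."""
    if SENTINEL in answer:
        return FALLBACK
    return answer
```

Checking for the sentinel as a substring (rather than an exact match) helps when the model wraps it in extra words, which a later comment notes does happen.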

I’m not so sure that you’re right. Let me give some cases.

It always works perfectly for me (using the ChatGPT Plus UI) when I add a NEGATIVE post-prompt like “Do not explain to me the modifications you make”, for example when I ask it to improve a text in English (English is not my native language).

It complies very well with this instruction. It does not explain the modifications to me, it only fixes the text. But if you don’t specify this negative direct instruction at the end, it USUALLY (not always) adds a long justification of obvious modifications that are not necessary. You know.

So, let me disagree with you. I think that negative direct instructions are very effective. And let me add: maybe precisely because, as you said, it is more aware of what not to say.

1 Like

I know what you mean. I use this other “trick” and it works better: to your “Unknown” I add a UTF-8 smiley, like :thinking: or :sweat_smile:, which is not “modifiable” in any way (no gender, no singular/plural, etc.). And it lets me detect this kind of scenario and improve the answer to the user with extra tips not provided by the AI :wink:

Note: actually, I took this idea from Bing Chat, when I saw that they used emojis! Until that moment it had not occurred to me that this was possible. :grin:
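A sketch of the emoji-sentinel variant, assuming the prompt instructs the model to answer with the 🤔 emoji when the answer is not in the context:

```python
SENTINEL_EMOJI = "\N{THINKING FACE}"  # 🤔, U+1F914

def is_out_of_scope(answer: str) -> bool:
    """An emoji has no gender, plural, or conjugated variants,
    so a plain substring check reliably detects the 'unknown' case."""
    return SENTINEL_EMOJI in answer
```

When this returns True, the application can substitute its own answer with extra tips, exactly as the comment describes.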


The emoji is a great idea. Sometimes my Unknown turns into a sentence when the AI goes off on its own journey. Luckily it normally makes sense anyway

1 Like

Hi, one very important parameter has not been mentioned in this thread so far, but in my understanding it is of very high importance! This is the “presence penalty” parameter. Turning this up will increase the model’s likelihood to talk about new topics, so have you already tried this out?

Also, maybe turn down the temperature parameter?
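For reference, these sampling parameters travel alongside the messages in the API request. A sketch with conservative values; the model name is illustrative and the exact numbers are a matter of tuning:

```python
# Conservative sampling settings for an on-topic Q&A assistant.
request_params = {
    "model": "gpt-3.5-turbo",   # illustrative model name
    "temperature": 0.1,         # low randomness, more deterministic answers
    "top_p": 0.1,               # narrow the sampling pool further
    "presence_penalty": 0.0,    # keep low: higher values ENCOURAGE new topics
}
```

Note the direction of presence_penalty: since raising it pushes the model toward new topics, an assistant that must stay on-topic wants it low.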

And one thing to add: try to give your command before the other input. In my experience, commands are prioritized more when they come before the other text. This makes sense to me: if you say “translate a text” and the following text contains something that might also be interpreted as a command, it makes sense to prioritize the translation command because it comes first.

For me this mix of tools works well enough. Let me know if this is also the case for you :slight_smile:

1 Like

I didn’t say it never works, but sometimes it can be less productive than thinking about things in a positive rather than negative sense. As a LLM, it’s just taking the text you enter and returning relevant stuff. Just sharing my experience…

Again, though, I’m not saying it never works, but it’s sometimes easier to go in the other direction.


Yeah, you have pointed out an important detail to keep in mind: some user requests can have long texts embedded in them. In those cases, concatenating a “post-prompt” to the raw user request might not make sense.

But in my case (a smart auto-responder based on Q&A-format queries) I don’t think this scenario applies. But thanks for pointing it out, because it’s necessary to take it into account in future API calls for other uses, like translations, grammar corrections, requests to summarize, etc. Yeap.

I have both parameters at 0.1 (temperature and top_p).

Yes, I understood your comment; sorry if it seemed that I had taken it literally. We both know that it’s not a black-or-white question.

That said, I am really counting the hours until I launch BeeHelp to the public, to begin testing intensively what I built across different knowledge bases and use cases, and to be able to improve the API calls in relation to all these questions and difficulties.

I realize that I need to add a clear disclaimer message for these new customers regarding this initial “beta” phase. I am also counting on the OpenAI folks improving the next versions of ChatGPT a lot (hopefully with a flatter language model, not “so conversational”!!) in the coming weeks or couple of months :thinking:

1 Like

I just want to say that it’s a nice use case :slight_smile:

1 Like

Folks, here is the next episode of caos30 taming the horse called the ChatGPT completion endpoint :stuck_out_tongue_winking_eye:

You’re invited to read it and help me with your smart point of view :grin:
You know that you’re very welcome.

Have you tried building the prompt with semantic search?

Interesting, but no. It’s the first time I’ve read something like this.

Are you thinking of doing a FIRST call to the ChatGPT completion endpoint asking it to “rewrite” the user question taking into account some extra context information, like the closest Q&A in the knowledge database, and then using the output to do a SECOND call for the answer?

Well, even if it were minimally successful, this could almost double the cost of using the API, and the goal is just the opposite: to avoid consuming the API for OUT-OF-SCOPE questions, detecting them before querying the API if possible, or, if they do get through, responding in a way that discourages the user from continuing to ask that kind of question.

Seen from above: we try to optimize the answers and the cost of the system to nicely serve users asking questions within the scope of our knowledge DB, while discouraging other kinds of questions.

In this sense, I’m not worried about the system responding to a COUPLE of out-of-scope user questions, but I do worry about potential users abusing the system to get any kind of answer, all day, at any hour.

Yeah… it’s more complicated than it initially seemed :sweat_smile:

1 Like

You need to use embeddings to find the closest documents in your data source, not ChatGPT; embeddings are a lot cheaper and faster, FYI.

For example

This is from a tutorial I wrote: Build a ChatGPT Powered Markdown Documentation in No Time | by louis030195 | Mar, 2023 | Better Programming
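A minimal sketch of the embeddings-based retrieval described above, using cosine similarity over pre-computed vectors. The vectors here are toy values; in practice each question/document and the user query would be embedded via an embeddings endpoint, and the top matches pasted into the prompt as the CONTEXT INFORMATION:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 = identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def closest_documents(query_vec: list[float],
                      doc_vecs: list[list[float]],
                      top_k: int = 3) -> list[int]:
    """Return indices of the documents most similar to the query vector."""
    ranked = sorted(
        range(len(doc_vecs)),
        key=lambda i: cosine_similarity(query_vec, doc_vecs[i]),
        reverse=True,
    )
    return ranked[:top_k]
```

Because only the retrieved documents reach the chat endpoint, this keeps cost down and also gives a natural out-of-scope signal: if even the best match scores below some threshold, the question can be rejected without calling the completion API at all.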