How to prevent ChatGPT from answering questions that are outside the scope of the provided context in the SYSTEM role message?

Yes Louis, I’ve been using it since the beginning of this project.
But it is not enough when:

  1. Your knowledge repository is semantically wide enough that there is ALWAYS something “close enough” (by embedding similarity) to the user’s question.

  2. The user’s request is semantically very close to something in your knowledge base, but the user’s real question is, in some respect, far from your core knowledge. Example: you have knowledge about the premium plan for your service, and the user asks: “give me a plan to build a premium service for my business.”

In these scenarios, embeddings don’t stop the ChatGPT completion endpoint from giving a long, well-elaborated answer to whatever the user asked. Do you see what I mean?
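To make that failure mode concrete, here is a minimal sketch of the usual embedding gate (cosine similarity against retrieved chunks, with a threshold). The vectors below are toy placeholders, not real embeddings, and the threshold value is an assumption: the point is that an off-scope question like “build me a premium service” can land just as close to a “premium plan” chunk as a genuinely in-scope question, so the gate passes both.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def passes_gate(question_vec, chunk_vecs, threshold=0.8):
    """The usual pre-filter: answer only if some knowledge chunk is 'close enough'."""
    return max(cosine_similarity(question_vec, c) for c in chunk_vecs) >= threshold

# Toy vectors standing in for real embeddings (illustrative only).
premium_plan_chunk = [0.9, 0.1, 0.0]    # "our premium plan costs $10/month"
in_scope_question = [0.85, 0.15, 0.05]  # "how much is the premium plan?"
off_scope_question = [0.8, 0.1, 0.2]    # "give me a plan to build a premium service"

print(passes_gate(in_scope_question, [premium_plan_chunk]))   # → True
print(passes_gate(off_scope_question, [premium_plan_chunk]))  # → True: the gate can't tell them apart
```

Both questions clear the similarity threshold, so a pre-retrieval gate alone cannot reject the second one.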

Final thoughts: I think maybe you need to see this new “user assistance technology” the way we understand the human attention we receive in a classic brick-and-mortar shop. I mean: you can do little (or nothing?) to stop someone walking into your shop, asking foolish questions, and wasting the time of your friendly customer-support staff. Can you?

So, in the same sense, we cannot always prevent people from asking the chat assistant things that are out of scope… from time to time :sweat_smile:

Good addition, I’ll probably add it in. I wish there were a way to make it stop saying “in conclusion”. I’ve tried negative instructions and I’ve tried all caps. When I did that, it gave me this line: “In conclusion (just kidding, I won’t use that term again)”. I was like… look at this AI. It’s got jokes. It was actually very irritating. I haven’t tried GPT-4, because having so few prompts to use makes it just not something I want to try: if it’s better and I like it, I’m still limited (even though I’m paying for it). So, not useful. But there are so many times it uses tired phrases and just keeps doing it no matter what.

Although, it also has that bad habit of hallucinating no matter what I tell it. I ask it not to add something and it’ll rephrase and add it anyway. It’s frustrating. (Makes you want to run it locally, pulling from its own list of things it can and can’t talk about.) :smiley:

Maybe that feature it suggested should be something you can offer as part of an upgraded membership package. It’s a good idea. Just saying…

Some really good suggestions by @AgusPG in his link – we’ve tried a bunch of those with some excellent results.

So basically, try your best to control these parameters:

  1. Prompt engineering
  2. Context
  3. Post-response filtering/similarity.
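A rough sketch of how those three controls can stack in code, assuming a hypothetical `embed()` function and a refusal string of your choosing (none of these names come from the thread): the system prompt scopes the model, the retrieved context grounds it, and a post-response similarity check catches answers that drifted anyway.

```python
import math

# Hypothetical fallback reply for out-of-scope answers.
REFUSAL = "Sorry, I can only help with questions about our product."

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def build_messages(context_chunks, question):
    """1. Prompt engineering + 2. Context: scope the model to retrieved chunks."""
    system = (
        "You are a customer-support assistant. Answer ONLY from the context "
        f"below. If the answer is not in the context, reply exactly: {REFUSAL!r}"
    )
    context = "\n\n".join(context_chunks)
    return [
        {"role": "system", "content": f"{system}\n\nContext:\n{context}"},
        {"role": "user", "content": question},
    ]

def post_filter(answer, context_chunks, embed, threshold=0.75):
    """3. Post-response filtering: reject answers not grounded in the context."""
    ans_vec = embed(answer)
    best = max(cosine_similarity(ans_vec, embed(c)) for c in context_chunks)
    return answer if best >= threshold else REFUSAL
```

Here `embed` stands in for whatever embeddings call you use, and the threshold would need tuning against your own corpus; the post-filter is the layer that can still refuse the “premium service for my business” case even after the retrieval gate let it through.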

In our case, our system (CustomGPT) is used mostly for such use cases with customer data, where customers want the responses to be based on that data (with no hallucinating!) – e.g. Customer Service Livechat.
