Building a chatbot using GPT-3.5 Turbo: Is there a way to ensure that the chatbot strictly adheres to the specified domain?

Hello everyone,

I’m currently working on developing a friendly healthcare chatbot. However, I’m encountering an issue with the restrictions I’ve set for the chatbot. I have fine-tuned the GPT-3.5 Turbo model using a dataset of conversations, and I’m utilizing this fine-tuned model for my chatbot.

Here is the system prompt I’ve configured:
You are a healthcare assistant. You will only answer questions about healthcare, all other knowledge domains or AI use is to be politely denied.

Despite providing this prompt, I’m finding that the chatbot is still able to generate responses related to movies and various other domains. Is there a way to work around this and ensure that the chatbot strictly adheres to the specified domain in the system prompt?

I sincerely appreciate your assistance and support in advance.

Hey jaisvj001, welcome to the forum.

From my usage, I just have a line in the prompt which states, “If the question is not related to healthcare, return the string unsupported”, and then check the output from the function call; if it is unsupported, I raise an error message to the user. However, this is for GPT-4.
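
In code, the check might look something like this (a minimal sketch, assuming the official openai Python client v1+; the model name, UnsupportedTopicError class, and message wording are illustrative placeholders, not from the original post):

```python
# Minimal sketch of the sentinel-string guard described above.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a healthcare assistant. You will only answer questions "
    "about healthcare. If the question is not related to healthcare, "
    "return the string unsupported."
)


class UnsupportedTopicError(Exception):
    """Raised when the model flags an off-topic question."""


def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # the approach above was tested on GPT-4
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    answer = response.choices[0].message.content.strip()
    # Check for the sentinel before showing anything to the user.
    if answer.lower() == "unsupported":
        raise UnsupportedTopicError("Sorry, I can only answer healthcare questions.")
    return answer
```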

As you are using a fine-tuned model, having this line in the prompt, plus giving it a few training samples that actually showcase this sort of behaviour, should solve the problem for you.
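
For reference, such training samples would go in the chat-format JSONL used for GPT-3.5 Turbo fine-tuning. A rough sketch of building them (the file name and example questions are illustrative):

```python
# Sketch of fine-tuning samples, mixing refusals with a normal answer.
import json

SYSTEM_PROMPT = (
    "You are a healthcare assistant. If the question is not related to "
    "healthcare, return the string unsupported."
)

samples = [
    ("Can you recommend a good sci-fi movie?", "unsupported"),
    ("Write me a poem about the ocean.", "unsupported"),
    ("What are common symptoms of dehydration?",
     "Common symptoms include thirst, dark-coloured urine, fatigue, and dizziness."),
]

with open("fine_tune_samples.jsonl", "w") as f:
    for user_msg, assistant_msg in samples:
        record = {
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_msg},
                {"role": "assistant", "content": assistant_msg},
            ]
        }
        f.write(json.dumps(record) + "\n")
```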


I suspect that giving the AI much more information in the prompt about what it is, what website it is operating on, what clients it is interfacing with, and what its goals are will also improve performance when you tell the AI that general chat is not allowed and that its only purpose is to answer healthcare questions (yet not to diagnose any illnesses).
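
As a rough illustration only, a fleshed-out system prompt along those lines might look like this (the site name, persona, and rules are placeholders, not a known-good prompt):

```python
# Illustrative detailed system prompt; every specific here is a placeholder.
DETAILED_SYSTEM_PROMPT = """\
You are MEDICO, a friendly healthcare assistant embedded in the patient
portal of example-health.com. You chat with patients and their caregivers.

Your goals:
- Answer general healthcare questions in plain, friendly language.
- Never diagnose any illness; suggest consulting a clinician instead.

Rules:
- General chat is not allowed; your only purpose is healthcare Q&A.
- If a question is not about healthcare, reply with exactly: unsupported
"""
```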

You will find these short free courses from Andrew Ng and OpenAI staff helpful for your use case; they cover prompt building for commercial applications and methods to limit the bot's responses.


Thank you! This is working to some extent when provided as a system message. I'll need to add some sample data (showcasing this) and include this line in my prompt before proceeding with another round of fine-tuning.

I have also tried giving a detailed role, but it felt like accuracy was decreasing. Should we provide the same system message both during fine-tuning and when interacting with the model?

That would be best. The model already has chat training on a bunch of different subjects, so you want a standout identity that would not be anything normally put into a system prompt.

Also, you can add to your fine-tuned model a bunch of coverage with examples of it NOT answering questions: “I’m sorry, but I don’t engage in off-topic chat. I am MEDICO, and my purpose is to answer health care questions.”
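
In the same chat-format JSONL shown earlier in the thread, that refusal coverage might be built like this (the off-topic questions are illustrative):

```python
# Sketch of refusal-only training records with a fixed persona reply.
import json

REFUSAL = (
    "I'm sorry, but I don't engage in off-topic chat. I am MEDICO, "
    "and my purpose is to answer health care questions."
)

off_topic = [
    "Who won the World Cup in 2018?",
    "Can you help me write a cover letter?",
    "What's the best movie of all time?",
]

with open("refusal_samples.jsonl", "w") as f:
    for question in off_topic:
        record = {
            "messages": [
                {"role": "user", "content": question},
                {"role": "assistant", "content": REFUSAL},
            ]
        }
        f.write(json.dumps(record) + "\n")
```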


Thanks! I’ll give that a try.