Yes, I arrived at this post precisely because of this PROBLEM. Let me give a simple example.
My array of role messages is:
[system]
You’re a commercial assistant, be clear and nice answering user.
But if the user question has no relation with the provided CONTEXT INFORMATION,
then only response “I’m sorry, i don’t know the answer” and don’t say anything more.
Don’t talk about nothing more than the mentioned matters in the CONTEXT INFORMATION.
[system-context_user_question]
What is the cost of your service? You can enjoy a free account with certain limitations, and then upgrade to a PREMIUM account for unlimited use.
[system-context_user_question]
What is BeeHelp? BeeHelp is a chat bot assistant to help you answer questions from your customers. You just have to give it a collection of suitable questions and answers about your business and the leading artificial intelligence technology of OpenAI (chatGPT), will interact with your visitors 24/7.
[user]
How much it cost your service?
[assistant]
BeeHelp offers a free account with certain limitations, such as a limited number of questions and answers, and a limit on the number of users who can interact with the chatbot. If you need more questions and answers or a higher interaction capacity, you can upgrade to a PREMIUM account, which has a monthly fee.
The price of the PREMIUM account depends on the number of questions and answers you need and the number of users who interact with the chatbot. You can find more information about pricing on our website.
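For reference, the call looks roughly like this with the openai Python client (a simplified sketch: the model name is illustrative and I've abbreviated the context messages):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system", "content": (
        "You're a commercial assistant, be clear and nice answering user. "
        "But if the user question has no relation with the provided CONTEXT INFORMATION, "
        "then only response \"I'm sorry, i don't know the answer\" and don't say anything more. "
        "Don't talk about nothing more than the mentioned matters in the CONTEXT INFORMATION."
    )},
    # Each context Q&A pair is passed as an additional system message
    {"role": "system", "content": (
        "CONTEXT INFORMATION: What is the cost of your service? You can enjoy a free account "
        "with certain limitations, and then upgrade to a PREMIUM account for unlimited use."
    )},
    {"role": "system", "content": (
        "CONTEXT INFORMATION: What is BeeHelp? BeeHelp is a chat bot assistant to help you "
        "answer questions from your customers..."
    )},
    {"role": "user", "content": "How much it cost your service?"},
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=messages,
    temperature=0,
)
print(response.choices[0].message.content)
```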
So… if you read the provided context (about FREE vs PREMIUM)… ChatGPT has HALLUCINATED quite a lot:
- I never told it that the PREMIUM plan means a “monthly fee”. In fact, that is completely false in my case, because my payment model is “pay as you go”.
- It also hallucinates when it justifies the PREMIUM plan with “if you need more questions and answers”. That happens to be true, but it relies on information I did not pass as context here, so it is hallucinating again.
As you can read at the top, my “system” instruction is quite emphatic:
But if the user question has no relation with the provided CONTEXT INFORMATION,
then only response “I’m sorry, i don’t know the answer” and don’t say anything more.
Don’t talk about nothing more than the mentioned matters in the CONTEXT INFORMATION.
So this kind of hallucination is quite frustrating… it makes it very difficult to use in a production environment. I’ve invested almost 3 weeks improving the initial system instruction, and now these hallucinations are less frequent and less shocking. But it could be worse in other production contexts.
Anyway, it’s a great tool, of course. I’d rather put up with these hallucinations in 10% of responses than lose the other 90% of visitors who are happy to get instant attention 24/7.
I hope that, as some of you said, OpenAI makes the system instruction messages MORE RELEVANT in shaping answers and avoiding hallucinations.
Indeed, I use temperature = 0. I will try 0.1… maybe the system is confusing “0” with empty and then falls back to the default (higher) temperature!
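For anyone who wants to check their own wrapper code for that kind of bug: in Python, 0 is falsy, so a default applied with `or` silently replaces temperature = 0 with the default. A tiny illustration (the helper names here are made up, not part of the OpenAI library):

```python
DEFAULT_TEMPERATURE = 1.0

def build_params_buggy(temperature=None):
    # BUG: 0 is falsy in Python, so temperature=0 silently becomes the default
    return {"temperature": temperature or DEFAULT_TEMPERATURE}

def build_params_safe(temperature=None):
    # Correct: only fall back to the default when no value was given at all
    return {"temperature": temperature if temperature is not None else DEFAULT_TEMPERATURE}

print(build_params_buggy(0))  # {'temperature': 1.0}  <- the 0 was lost
print(build_params_safe(0))   # {'temperature': 0}    <- the 0 is respected
```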