Definitely.
In coding it’s the most obvious, but I have noticed it in other domains.
Sometimes it responds to illogical questions/statements/commands with a hallucinated answer that can be completely wrong but sound right. It’s pretty easy to implicitly, or even accidentally, encourage GPT to hallucinate a response it would otherwise “know” is not true, especially in a long, continuous conversation.
Actually, you can trick GPT into giving the wrong answer by innocently talking about other things, then asking a question whose answer gets influenced by that context. It’s almost like an accidental few-shot prompt.
Q. Who sells the paver Blu 60mm?
A. […]the Blu 60mm paver is a product offered by Techo-Bloc
— New Conversation —
Q. Who is Unilock?
A. Unilock is […]
Q. Do they sell the Blu 60mm paver?
A. Yes, Unilock does offer the Blu 60mm paver
Q. Are you sure it’s Unilock, and not another company?
A. I apologize for any confusion in my previous response. It appears that I made an error. The Blu 60mm paver is actually a product of Techo-Bloc, not Unilock
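In API terms, the same trap is easy to reproduce. Here’s a minimal sketch using the OpenAI Python client (the model name and the assistant’s earlier wording are placeholders, not my actual transcript): the earlier Unilock turns sit in the message history, so the model is primed to attach the paver to Unilock.

```python
# Minimal sketch (OpenAI Python client, v1 style); model name and the
# assistant's earlier wording are placeholders, not the original transcript.
from openai import OpenAI

client = OpenAI()

# The earlier Unilock turns act like an accidental few-shot prompt: by the
# time the paver question arrives, "Unilock" dominates the context window.
messages = [
    {"role": "user", "content": "Who is Unilock?"},
    {"role": "assistant", "content": "Unilock is a manufacturer of concrete pavers and retaining wall products..."},
    {"role": "user", "content": "Do they sell the Blu 60mm paver?"},
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
)

# Typical result: a confident "Yes, Unilock offers the Blu 60mm paver",
# even though a fresh conversation attributes the product to Techo-Bloc.
print(response.choices[0].message.content)
```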
I’m sure many people have experienced this through a simple procedure of:
- Ask a question or ask for a result
- Challenge the response
- GPT apologizes and provides the correct response
The question here becomes: “Was the initial question/prompt wrong?”, which is a great question.
That’s where we (other people) can step in and say, “No, that’s not right.”
The most important factor is that it’s not intentionally misleading us, the way most other “hallucinating” authority websites, social media pages, and advertising intend to.
I think this highlights the importance of education, critical thinking, and collaboration.
Unfortunately, we are going to see a lot of unintentional misinformation.
Ironically, it seems there are more tools for education through GPT (big yikes) than for education OF GPT (or LLMs in general).
Another issue I’m seeing come to life is “prompt optimization”. This idea/concept cannot exist, yet. If a person cannot ask a question or prompt correctly, then they won’t be able to understand the answer or get a usable one. If you cannot ask the question correctly, learn more. Learn to prompt correctly, and learn what the right questions are to ask. I can appreciate using “prompt engineering” for structure. I can appreciate “spell-checking”. But “prompt optimization” is a demon that should be left alone. Like, come on: “Hey, I heard you’re having issues with a potentially hallucinated answer. Instead of critically evaluating it yourself, how about we apply the same, currently hallucinating model to assume what you want and improve your prompt?”
Prompt optimization will simply lead to higher confidence in hallucinated answers that don’t truly reflect what the user actually intended (because they probably don’t even know).
I don’t think it will be tackled very well. I would like to believe that I question everything, and that I have a loose, basic, but usable understanding of how GPT works, yet I still fall victim to these hallucinations.
I can’t imagine how this is affecting the generation that is too young to know, or too old to understand.
It also highlights how badly ChatCompletions needs LOGPROBS!!!
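For anyone who hasn’t played with them: the legacy Completions endpoint already returns per-token logprobs, which is exactly the kind of confidence signal that would help flag shaky answers. A rough sketch of what that looks like (the model name is just a placeholder, and exact fields may differ by client version):

```python
# Sketch of the per-token confidence signal being asked for, using the legacy
# Completions endpoint (which already exposes logprobs). OpenAI Python client,
# v1 style; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="Which company sells the Blu 60mm paver?",
    max_tokens=20,
    temperature=0,
    logprobs=5,  # also return the top-5 alternative tokens at each position
)

choice = response.choices[0]
# A low logprob on the company-name tokens hints the answer is shaky and worth
# double-checking; ChatCompletions currently gives no equivalent signal.
for token, logprob in zip(choice.logprobs.tokens, choice.logprobs.token_logprobs):
    print(f"{token!r}: {logprob:.2f}")
```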