Building Hallucination-Resistant Prompts

I have a technique I’ve been working on called “Hallucination Dodging,” and it seems to be working well both for Q&A scenarios where you’re asking questions over a closed corpus of data, and for LangChain-style planning tasks where you need to predict a list of tools/functions to call. The key is to let the model go ahead and hallucinate, but then make it realize it just hallucinated. It will then happily “dodge” the hallucination in its final response… All of this is built on my Better Chain of Thought (CoT) pattern.
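To make the idea concrete, here’s a minimal sketch of what a dodging-style prompt might look like for the closed-corpus Q&A case. The step wording, the template, and the `build_prompt` helper are illustrative assumptions on my part, not the actual Better CoT template:

```python
# Illustrative sketch only -- the step names and wording are assumptions,
# not the exact "Better Chain of Thought" pattern described above.
DODGING_PROMPT = """Answer the question using ONLY the context below.

Context:
{context}

Question: {question}

Work through these steps, showing your work for each:
Step 1 - Draft an initial answer (it's OK if parts are guesses).
Step 2 - Check every claim in the draft against the context. List any
claim that is not directly supported; those are hallucinations.
Step 3 - Write the final answer, dropping or correcting every claim you
flagged in Step 2. If the context doesn't contain the answer, say so.
"""


def build_prompt(context: str, question: str) -> str:
    """Fill in the template; the result is sent as a single user message."""
    return DODGING_PROMPT.format(context=context, question=question)


if __name__ == "__main__":
    print(build_prompt(
        context="The Eiffel Tower is 330 metres tall.",
        question="How tall is the Eiffel Tower and who designed it?",
    ))
```

The point of the structure is that the model is allowed to hallucinate in Step 1, forced to notice it in Step 2, and then only Step 3 is treated as the real answer.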

I am finding there are some hallucinations I just can’t dodge. So far these have been cropping up in complex logic problems I’ve been asking GPT to solve. Sometimes the model will hallucinate a conclusion that’s simply incorrect, and even when I confront it about the hallucination it denies it. It just can’t see it…
