How can we prevent large language models like GPT-4 from hallucinating?

What are some effective ways to address hallucinations in large language models like GPT-4, where the model generates responses that aren't grounded in the given input or context?

Welcome to the community!

That’s typically fairly straightforward. The newer models are less prone to this, but here are some pointers:

  1. Don’t ask a model to do what it can’t do.
  2. Ensure the model has enough information to accomplish the task you’re asking of it (see the sketch after this list).
  3. Limit or eliminate redundant or confusing information in the context.
  4. Formulate your query so that it’s easy or trivial to check whether the result is accurate, and then check it.
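A minimal sketch of points 2 and 3: hand the model the source material it needs, nothing more, and tell it to decline rather than guess. The context string and question here are hypothetical placeholders; this assumes the official `openai` Python package (v1+) with an API key in the environment.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical source text -- in practice, retrieve only the passages relevant to the question.
context = "Acme's Q3 revenue was $12.4M; Q2 was $11.1M."

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "Answer ONLY from the context provided. "
                "If the context does not contain the answer, reply 'I don't know.'"
            ),
        },
        {
            "role": "user",
            "content": f"Context:\n{context}\n\nQuestion: What was Acme's Q3 revenue?",
        },
    ],
    temperature=0,  # low temperature keeps the answer close to the source text
)

print(response.choices[0].message.content)
```

The refusal instruction matters: giving the model an explicit “I don’t know” escape hatch is usually cheaper than trying to catch a confident fabrication afterwards.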

If you use the models in a “Generative AI” capacity, you’ll likely need to run a fact-check pass over your result, but that’s just point 4 with a step in between (sketched below).
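One way that fact-check pass could look, as a hedged sketch: a second call that verifies the draft answer against the same source context. The `context` and `draft_answer` values are hypothetical, and any capable verifier model would do.

```python
from openai import OpenAI

client = OpenAI()

context = "Acme's Q3 revenue was $12.4M; Q2 was $11.1M."
draft_answer = "Acme's Q3 revenue was $12.4M."  # output of the first, generative call

# Second pass: ask the model to judge whether the claim is supported by the source.
check = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a fact checker. Reply 'SUPPORTED' or 'UNSUPPORTED', "
                "followed by one sentence of justification."
            ),
        },
        {"role": "user", "content": f"Context:\n{context}\n\nClaim:\n{draft_answer}"},
    ],
    temperature=0,
)

verdict = check.choices[0].message.content
if verdict.startswith("SUPPORTED"):
    print("Claim is grounded in the source context.")
else:
    print("Flag for review:", verdict)
```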


Hi - I tried to develop a custom GPT that self-reports hallucinations. Feel free to test it out in your research here & good luck!
