Reduce hallucinations in my bot

Hi Everyone,

I am looking for a solution to reduce hallucinations in my bot. How can I do it?

Best regards
Damian

I don’t want to trivialise the issue of hallucination reduction; it’s important and needs thoughtful attention and effort to solve.

But… hallucination is a symptom of a lack of information: the model has been trained to give answers, and when it can draw on existing data, those answers will be accurate and hallucination-free. Therefore we must make every effort to ensure that the AI is answering questions for which there is either ample data in the generic training set provided by OpenAI, or data that you supply as context. That context can be included directly in the prompt, pulled back from a database, or, in some cases, baked in through fine-tuning.
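A minimal sketch of supplying context alongside the question, so the model answers from provided data rather than guessing. The function name, system instruction, and example strings are all assumptions for illustration, not a fixed recipe:

```python
def build_prompt(question: str, context: str) -> list[dict]:
    # Hypothetical helper: packages retrieved context with the user's
    # question in the chat-messages format. Telling the model to say
    # "I don't know" when the context is insufficient is one common
    # guard against hallucination.
    system = (
        "Answer the question using only the provided context. "
        "If the context does not contain the answer, say 'I don't know.'"
    )
    user = f"Context:\n{context}\n\nQuestion: {question}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_prompt(
    "What is the refund window?",
    "Refunds are accepted within 30 days of purchase.",
)
```

The resulting `messages` list can then be passed to a chat completion call; the context itself would typically come from a database or search step.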

Nice to meet you @Foxalabs. Thanks a lot for answer. Provides context in the system prompt :thinking: Is this a good practice ?

Typically I provide context in the user prompt surrounded by ###{context}### markers and then tell the model to use the information between the ###'s as context… You can certainly try the system prompt, but I have not tried it.
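A quick sketch of the ### marker approach described above; the wording of the instruction and the example strings are assumptions:

```python
def wrap_context(question: str, context: str) -> str:
    # Surround the supplied context with ### markers and instruct the
    # model to use only the delimited text when answering.
    return (
        "Use the information between the ### markers as context "
        "when answering.\n"
        f"###{context}###\n"
        f"Question: {question}"
    )

prompt = wrap_context(
    "When do orders ship?",
    "Orders ship within 2 business days.",
)
```

The delimiters give the model an unambiguous boundary between your data and the question, which also helps against prompt injection hidden in the context.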

The following example from the OpenAI cookbook might help:

There’s a section about fine tuning to have your bot say “I don’t know” instead of hallucinating when it doesn’t know the answer :laughing:
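A minimal sketch of what such fine-tuning data could look like, built as JSONL in the OpenAI chat fine-tuning format; the example questions and answers are invented:

```python
import json

# Training examples that teach the model to answer "I don't know"
# for questions it has no data on, instead of inventing an answer.
examples = [
    {"messages": [
        {"role": "user", "content": "What is the CEO's middle name?"},
        {"role": "assistant", "content": "I don't know."},
    ]},
    {"messages": [
        {"role": "user", "content": "How many offices did we have in 1987?"},
        {"role": "assistant", "content": "I don't know."},
    ]},
]

# One JSON object per line, as the fine-tuning endpoint expects.
jsonl = "\n".join(json.dumps(e) for e in examples)
```

A real training set would mix these refusals with plenty of answerable examples, so the model learns *when* to say "I don't know" rather than refusing everything.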
