Higher Frequency of Hallucinations?

Has anyone else noticed an increase in hallucinations lately? I’m using the Assistants API and saw only one in months of usage. Now, in the last two days, I estimate about one in every seven responses is a hallucination.

If not, is there a good resource or article on reducing or mitigating hallucinations? I understand that higher temperatures can make this worse, but I lack other ideas.

There are several ways to reduce AI hallucinations, and which one fits best depends on what you are trying to generate. None of them is perfect, but they can reduce hallucinations significantly. I would suggest RAG (retrieval-augmented generation).
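For example, here is a minimal sketch of the general RAG pattern (separate from the Assistants API’s built-in retrieval), assuming the openai Python package (v1.x), numpy, and a small in-memory document set; the model names are just examples:

```python
# Minimal RAG sketch: retrieve relevant text, then answer only from it.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "Refunds are processed within 5 business days.",
    "Support is available Monday through Friday, 9am-5pm.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(documents)

def answer(question, top_k=1):
    q_vec = embed([question])[0]
    # Cosine similarity between the question and each document.
    sims = doc_vectors @ q_vec / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vec)
    )
    context = "\n".join(documents[i] for i in sims.argsort()[::-1][:top_k])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0.2,  # lower temperature also tends to reduce confabulation
        messages=[
            {"role": "system",
             "content": "Answer only from the context below. "
                        "If the answer is not in the context, say you don't know.\n\n"
                        f"Context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("How long do refunds take?"))
```

The key points are grounding the answer in retrieved text, instructing the model to refuse when the context doesn’t contain the answer, and keeping temperature low.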


Since the Assistants API uses embeddings and vector stores, attaching files means you are already using its built-in RAG.
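For reference, that built-in path looks roughly like the sketch below, assuming the openai Python package (v1.x); the file name, assistant name, and model are placeholders, and the beta namespaces may differ between SDK versions. The instructions explicitly steer the model away from answering outside the attached files:

```python
# Sketch of the Assistants API's built-in retrieval (file_search).
from openai import OpenAI

client = OpenAI()

# Upload a document for retrieval (placeholder file name).
file = client.files.create(file=open("policies.pdf", "rb"), purpose="assistants")

assistant = client.beta.assistants.create(
    name="Support Assistant",
    model="gpt-4o",
    tools=[{"type": "file_search"}],
    instructions=(
        "Answer using only the attached documents. "
        "If the documents do not contain the answer, say so instead of guessing."
    ),
)

# Attach the file to the user message so it is indexed and searchable.
thread = client.beta.threads.create(
    messages=[{
        "role": "user",
        "content": "How long do refunds take?",
        "attachments": [{"file_id": file.id, "tools": [{"type": "file_search"}]}],
    }]
)

run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id, assistant_id=assistant.id
)
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)
```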

You mentioned an increase in hallucinations; has the content you’re handling with the Assistants API changed? With specific terms or names, there are cases where the embedding model cannot vectorize the data well, which hurts retrieval.

It might also be helpful to try using a different model.
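For instance, pointing an existing assistant at another model is a small change (the assistant ID and model name below are placeholders):

```python
# Sketch: switch an existing assistant to a different model to compare outputs.
from openai import OpenAI

client = OpenAI()
client.beta.assistants.update("asst_XXXXXXXX", model="gpt-4o")
```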

In any case, providing specific examples will help other members suggest solutions!