GPT-4o hallucinates a lot

I feel the same way.
Instead of saying "I don't know" or "I have no information" about something it has no knowledge of (for example, when asked about a fairly obscure Japanese doujinshi manga), it keeps producing false hallucinations based solely on the wording of the work's title.
This is a particularly bad kind of hallucination, one I had rarely seen since GPT-4.
Combined with how fluent the output is, I would not be able to recognize the hallucination unless I already knew the correct information.
