Hallucination vs Confabulation

I started running this down in the research papers.

So far, I've found this:

Survey of Hallucination in Natural Language Generation by Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Wenliang Dai, Andrea Madotto, Pascale Fung (PDF)

Researchers started referring to such undesirable generation as hallucination [125].¹

Footnote 1:

The term “hallucination” first appeared in Computer Vision (CV) in Baker and Kanade [5] and carried more positive meanings, such as superresolution [5, 112], image inpainting [48], and image synthesizing [226]. Such hallucination is something we take advantage of rather than avoid in CV. Nevertheless, recent works have started to refer to a specific type of error as “hallucination” in image captioning [13, 159] and object detection [4, 83], which denotes non-existing objects detected or localized incorrectly at their expected position. The latter conception is similar to “hallucination” in NLG.

Feel free to run down references such as [5] and [125]. The footnote is enough for me.


However, the Wikipedia article on AI hallucination does note:

In artificial intelligence (AI), a hallucination or artificial hallucination (also occasionally called confabulation [1] or delusion [2]) is a confident response by an AI that does not seem to be justified by its training data.[3]
