Is there any value in reporting factual errors I have found in dialog with ChatGPT?

I’d like to help improve the AI. I wouldn’t mind access to version 4 while doing so.

In dialog with ChatGPT I have encountered a number of factual errors — the sort of thing Wikipedia gets right: dates, authors, etc.

Is this expected? Or is there some value in reporting these to the devs?

You can always hit the downvote/upvote button if using ChatGPT.


Welcome to the forum!


See AI hallucination.


Hah! I didn’t even notice those … ok that’s perfect. I’ll use that.

Interesting. I had assumed these were more complex “hallucinations” that emerged over a prolonged interaction, not simple errors of factual information. But I see from the wiki link that they occur more frequently than that.

I think “hallucination” gives them too much cognitive content — some are just mistakes. In some technical sense a hallucination is a misperception of factual reality, so there is a pretty clear logical distinction between a hallucination and an error. If I ask ChatGPT what “2+2=” and it says 5, I’m not sure I’d call that a hallucination. But it seems a waste to debate the terminology.

Honestly, I think if more people were aware of these errors, some of the enthusiasm for this version of ChatGPT would be tempered.

I think the problem is that the errors can be quite subtle and believable. I feel I am merely discovering what everyone already knows.