Hello everyone,
I wanted to share an experience I recently had while interacting with AI that underscores the crucial issue of cognitive bias in information dissemination.
I was watching a video from the YouTube channel Kurzgesagt in which they discussed a mistake they had made about the total length of blood vessels in the human body.
Context:
Kurzgesagt used the 100,000 km figure in a number of videos, always citing a source. After some time, they found that although several articles and even papers used this number, each either pointed to another source that simply repeated the 100,000 km without explaining where it came from, or cited no source at all for the statement.
So they started digging deeper to find the original source.
To cut this short:
After a year or so, they found the original source and discovered that it dated from 1920 and was therefore severely outdated.
They and several others just assumed the information was correct and used it as fact. This is a prime example of cognitive bias.
For some reason I can't link my source here, so you'll have to search on YouTube for:
We Fell For The Oldest Lie On The Internet - Kurzgesagt
and look in the video's description for the sources.
It made me curious to see if ChatGPT would fall into the same trap.
I asked ChatGPT for the total length of blood vessels, and it returned the widely quoted figure of about 100,000 kilometers.
Please test this for yourself; it may just have been the answer I happened to get.
Even though this was just a fun fact in a video, it shows how quickly something like this can mislead humans, and now AI as well.
This becomes problematic when we use this kind of false information for work where accuracy is not only important but foundational. Imagine building something on a porous foundation…
I know this is something OpenAI is probably aware of, but I still wanted to share it.
Thank you for taking the time to read this, and I look forward to further discussion on how we can improve the quality of information provided by AI.
Best Regards