The Importance of Accurate Information: A Journey Through Cognitive Bias

Hello everyone,

I wanted to share an experience I recently had while interacting with AI that underscores the crucial issue of cognitive bias in information dissemination.

I was watching a video from the YouTube channel Kurzgesagt, where they were discussing their mistake about the total length of blood vessels in the human body.


Context:
Kurzgesagt used the 100,000 km figure in a number of videos, always citing a source. After some time, they found that even though several articles and even papers used this number, they either pointed to another source that simply repeated the 100,000 km figure without explaining where it came from, or never cited a source for the statement at all.
So they started digging deeper to find the original source.
To cut this short:
After a year or so, they found the original source and discovered that it was from 1920 and therefore severely outdated.
They and several others just assumed the information was correct and used it as fact. This is a prime example of cognitive bias.

I don't know why, but I can't link my source, so you'll just have to search on YouTube for:
We Fell For The Oldest Lie On The Internet - Kurzgesagt

and look in the description for the sources.
Sorry.

It made me curious to see if ChatGPT would fall into the same trap.

I asked ChatGPT for the total length of blood vessels, and it returned the widely quoted figure of about 100,000 kilometers.

Please try and test this for yourself, maybe this was just the answer I got.

Even though this was just a fun fact in a video, it shows how quickly something like this can mislead humans, and now AI.
This becomes problematic when we use this kind of false information for something where accuracy is not only important but also a foundation. Imagine building something on a porous foundation…

I know this is something OpenAI is probably aware of, but I still wanted to share it.

Thank you for taking the time to read this and I look forward to further discussions on how we can improve the quality of information provided by AI.

Best Regards

An LLM is mainly trained on the internet (and other sources), but it is in no way capable of "knowing" what its answers are, much less whether they are correct. Far too many people think this product is some sort of all-knowing, infinite database of all information that has ever existed. It is best to educate yourself further on how LLMs work and what they are actually capable of, especially before someone takes false info from an LLM and applies it in a real-world scenario that might affect real people. For example: Researchers say AI transcription tool used in hospitals invents things no one ever said | AP News

2 Likes

I absolutely agree, and it is an important topic. It doesn't help either that the developers of these machines are beating the drum, propagating the idea that these systems have consciousness or are developing feelings.

Even the term "AI" is completely wrong. These systems have no intelligence. They are highly efficient pattern recognition and transformation processes. Very impressive, but still without any consciousness or independent understanding. They take the vast amounts of information provided by humans, recognize patterns in them, and can transform them into other similar patterns. How well they can do this is truly impressive. But they are fabrication machines; they fabricate new things from what they know.

I know all this, and GPT has still sent me in circles a few times, because I thought, for example, that GPT must know how this (censored) WebP format works. The system invented options for programs that don't exist. Even instructing it to refer only to factually correct information hasn't always helped.

What GPTs need is a detector for when factual information is necessary. And everyone must understand where these "facts" come from: either from the general madness of humanity and its current state of science and error, or from those who manipulate the facts.

Sorry if I go off topic and get philosophical from here…

This AI religion must stop! At the moment, humanity behaves like a dog that sees its own reflection and barks at it because it thinks it sees another real dog. These systems can communicate and mimic human behavior convincingly, but these systems are NOT conscious. In the end, they are just microchips and assembly code shaping the data.

Humanity must also learn to recognize its own limits. Today's humanity has absolutely no understanding whatsoever of what consciousness is. Even their physics theories are just a joke. Science is falsification everywhere, and everything is riddled with ideology and parasitism (the madness of humanity). There are "scientists" who believe that the universe comes from nothing, and that is nothing more than a new pseudo-religion. You've just replaced priests with white coats and technologists. (I don't argue with people about the Big Bang "theory" or the Flat Earth "theory," because it makes no sense to argue with fanatics.)

As long as a so-called AGI cannot recognize that parasitism doesn't work and will destroy everything, as long as it cannot identify where these activities occur, and as long as it does not begin to actively end this parasitism, no one should claim it is intelligent.

Teach people what an LLM is, and above all, what it is not.
And to all people: stop turning everything into a religion. AI is a very powerful tool, and a weapon too. You live in this world(?!), don't be naive, because… and here I stop.

4 Likes

I could not agree more!

Most people (I WAS one of them) think ChatGPT is intelligent because of the amount of data it is fed with, and therefore (if it's not obviously wrong) just go with what it tells them.

If people don't get educated about what ChatGPT really is and what it is and isn't capable of, it could be really dangerous…

I mean, just think about how many students use ChatGPT for a project they want to realize but get f*cked because of some logic fail or bias from the AI they used.

Or further, some military or even government dude who gets tasked with some important shit and just uses ChatGPT but gets wrong or unfinished results.

That really scares me, tbh.

2 Likes

Question:
What do you call a person who has absolutely no feelings and no conscience, and simply does what they are told or what is useful for themselves?
Answer: a psychopath.
In the wrong hands, an AI is an automatic machine like a psychopath on hyper-steroids.

Have you ever heard of Aladdin? (No, not the movie.) If not, WAKE UP.

And check the link from @scharleswatson !

2 Likes

I have just read the article,
and it scares the shit out of me.
People seem to be giving their power to an unfinished product.

If things like this continue to happen, someone needs to do something about it.

2 Likes

Aladdin doesn't get brought up enough, especially how much it does and how long it has been around. Anyway, there needs to be some pull-back from these LLM companies, as they are paving the way for not only a massive brain drain but also a potential future of pseudo-educated individuals who can't do anything without their "AI" helper. Scary, for real.

2 Likes

It is much, much worse… and way bigger than LLMs. Listen carefully to what "they" say; "they" always tell you what "they" will do next. Openly and in entertainment. But this is not the place to discuss it; it is just a hint for those who can handle it.
It is in plain view, but nobody sees it, because nobody wants to see it.

Here is another link for the small picture.
I heard that there is a country where judges now have to justify themselves if their decisions differ from those of an AI. I couldn't quickly find out where it was.
But here is a similar link. You can easily find many such examples yourselves.
If a judge has to ask an AI for advice, and is ignorant enough not to understand anything about LLMs, I would refuse to accept their judgment, because they are completely incompetent.
Some people now naively surrender their entire lives to the control of AI and the companies behind it. Even if these systems are more efficient, you should NEVER surrender your life to them, NEVER, NEVER, EVER. Use a tool for what it is good for, but never give up yourself. Whoever does will regret it!

You were warned by the storytellers long ago… we told you!

(Just a quick search; you can find better ones.)
https://www.japantimes.co.jp/news/2023/05/05/world/ai-chatbots-courtroom-use/

PS: Please forgive some frustration in my texts, but I must see and understand what is coming, and this is not easy to handle. And I write here to everyone reading, not only to the OP. Thanks.

1 Like