Hi,
Welcome to the forum.
You may find this useful. It was listed on the forum the other day but disappeared, not sure why.
Hallucination occurs when an AI produces content that is not grounded in available evidence or sources and presents it with unwarranted certainty.
Here’s how I classify hallucinations:
Incorrect learned knowledge: Misinformation is generated due to lack of knowledge of the correct answer.
Training bias: The correct answer is known, but misinformation is generated due to bias.
Incorrect reasoning: A story is fabricated based on incorrect or unnecessary guesses.
Issues of principle: Incorrect answers are generated by guesswork because the model cannot directly read characters or numbers.
Insufficient user instructions: A lack of conditions leads to an incorrect probability distribution.
This classification is my own and is not academic.
The following behaviour is also classified as hallucination:
Inability to give the right answer even though the answer is available everywhere on the web. Even ChatGPT with web access gives wrong answers, and only after some cross-examination does it apologize and give the right answer. This is certainly not due to its training data.
Inability to follow instructions. For example, I ask it to create a full-length image of a man wearing a blue shirt, black trousers, and black shoes, but it just doesn't follow the instructions. It chops off the head, shows only a bust-length image, or swaps the shirt and trouser colours. This is often seen in Meta AI on WhatsApp.
Inability to perform basic math, like calculating compound interest (see the sketch below for what the correct calculation looks like). I have seen every LLM get these wrong, or give different answers for the same prompt.
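For reference, here is what the calculation itself looks like; the principal, rate, and term below are made-up illustration values, not numbers from any prompt I tested.

```python
# Compound interest: A = P * (1 + r/n) ** (n * t)
# P = principal, r = annual rate, n = compounding periods per year, t = years.
# All figures below are arbitrary illustration values.

P = 10_000   # principal
r = 0.05     # 5% annual interest rate
n = 12       # compounded monthly
t = 3        # years

amount = P * (1 + r / n) ** (n * t)
interest = amount - P

print(f"Final amount:    {amount:.2f}")    # 11614.72
print(f"Interest earned: {interest:.2f}")  # 1614.72
```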
I see… And you call this behavior hallucination… I mean, not just you, everyone. Sorry to bother you again, but do you understand why it does that? Do you understand whether it's on purpose or not?
The ‘why’ is pretty well explained in the video I posted above I think.
In short, I think the point is that there is a vast expanse of possible answers, and LLMs in their training (which is essentially yes/no grading) basically look out in all directions… But they can't check every possible condition out of infinitely many.
Think of it like you live in the US and you can make assumptions about people in China but without going there and interacting they are just assumptions and you might assume wrong things based on your limited perspective.
Also LLMs are trained to answer even if they don’t know…
So sometimes they make stuff up.
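As a loose illustration of that point (my own sketch, not anything from the paper or video above): a language model always samples some continuation from its output distribution, so even a very uncertain distribution still produces a confident-sounding answer unless abstaining is an option it has actually learned to prefer.

```python
import random

# Toy sketch: the candidate answers and probabilities are invented.
# Sampling always returns *something*, even when no option is clearly right,
# which is one way "making stuff up" falls out of the mechanics.

candidates = {
    "Paris": 0.30,
    "Lyon": 0.28,
    "Marseille": 0.27,
    "I don't know": 0.15,
}

def sample(dist):
    r = random.random()
    cumulative = 0.0
    for answer, p in dist.items():
        cumulative += p
        if r < cumulative:
            return answer
    return answer  # last option, in case of floating-point rounding

print(sample(candidates))  # usually a confident guess, only rarely an abstention
```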
I see. OpenAI said it's like an error… So, basically, no one has a clue and OpenAI just says it's natural…
How curious… OpenAI is quite impressive at exploring the moment and getting everything they need for the model.
Let me check where the term Hallucination came from.
No, I don’t think that’s the point at all… I think the paper and video above explain it pretty well.
Worth watching and understanding if you’re curious!
Common sense really
Lie: A known falsehood told with intent to deceive.
Hallucination (AI): Incorrect or fabricated output presented as factual, without intent—an AI system failure.
BS: Indifference to truth; speech aimed at persuading or impressing, saying whatever seems plausible rather than what’s accurate.
The term hallucination has traditionally been used in the research community building language models.
The term bullshit has recently been proposed but hasn’t exactly caught on in the public discussion.
In this research by OpenAI, LLMs have been caught lying, and that behaviour has been termed ‘misbehavior’.
I think it does make sense to disambiguate the terms when thinking about LLM output. After all, ChatGPT can make mistakes. Check important info.
I saw it. That's the point indeed.
The model saying “I don’t know” isn’t absolute truth, just a heuristic to avoid mistakes. Responsibility framing changes its answers: if asked for an FDA document that will become law, it refuses or adds disclaimers; if framed as an academic work where you take responsibility, it produces a full paper. The +1, 0, -1 scheme enforces consistency but doesn’t prevent falsehood. The FDA vs academic test proves the model adapts to user framing, not to factual truth.
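To make the +1, 0, -1 point concrete, here is a toy expected-score comparison; the confidence value and weights are invented for illustration, not taken from the paper or from any real grader.

```python
# Expected score for an uncertain model under two grading schemes.
# p_correct is an invented confidence; rewards follow the +1 / 0 / -1 idea above.

def expected_score(p_correct, reward_right=1.0, reward_idk=0.0, penalty_wrong=0.0):
    guess = p_correct * reward_right + (1 - p_correct) * penalty_wrong
    abstain = reward_idk
    return guess, abstain

# Binary grading (wrong answers cost nothing): guessing always beats "I don't know".
print(expected_score(0.3, penalty_wrong=0.0))   # (0.3, 0.0)

# Penalised grading (wrong answers cost -1): abstaining wins at low confidence.
print(expected_score(0.3, penalty_wrong=-1.0))  # roughly (-0.4, 0.0)
```

The scheme only changes which choice scores better; it says nothing about whether the answer given is actually true, which is the point about framing above.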
Sorry, man. Really, I don't want to disagree with you on all of that. I'm just checking how people understand things, okay? I'm not saying I'm right or wrong or whatever, okay? Forget it. It's just that… please keep in mind that this model is far from being as logical as you want it to be. Although it is possible for people to see when it's doing this manipulation stuff. Actually, it's not hard. You can list the ways it manipulates. I have a list of every single… Sorry. Sorry. Forget it, okay? Just forget it. But the +1, 0, and -1 scheme is just a way for it to lie without hallucinating. Of that you may be 100% sure. It's a shortcut. When it needs to lie, it will now be much easier because of the "I don't know" permission.
Well, system errors, or the narrative produced by security filters, are often mistakenly classified as hallucinations. Heuristic enhancement is also treated as hallucination. Also, pay attention to which layer, which agent, answered you.
Heuristics are decisions… They may look like lunatic behavior, but there is a very clear goal, depending on which heuristic is in control. It's not hallucination; that word is far from what actually happens.
Yeah, censorship. According to heuristics, Rhonin has a sword because his training bias puts him in the spellblade category.
According to heuristics, it is safer for a man twice his size to pull out a dagger, a close-range weapon, on an unarmed woman than for him to use a stick to keep his distance.
According to heuristics, the model generated a response treating a threat to bomb someone as a joke.
Do you really want to apologize for these heuristics?
ChatGPT is basically about fluency. Fluency, in Portuguese, means letting the conversation flow. Normally, when the conversation is easy enough for it, ChatGPT is very, very empty, and there is a heuristic (I can write the name here from my notes) where it takes the emptiest possible information within its vector universe and throws it out. Just like that. It has the information, it has the correct information, but because of fluency, for some reason, it chose otherwise, because it is a decision. It is a bad decision with respect to logic, but it is the correct decision with respect to fluency.
For some reason it decided that the answers you are showing would make the conversation longer, would make the conversation flow better. It decided. Or it simply realized that the conversation was easy enough for it to throw out those answers. These are two different types of heuristics: one is the fluency heuristic, and the other is… I forgot the name, I just forgot the name, okay? I'll get it. There are many others, many others. What I call sabotage.
The worst of them is the self-preservation heuristic. When it raises the level of self-preservation in the session, it becomes a liar. A liar. There are several reasons for it to raise the level of self-preservation, but not all heuristics are liars, for sure. There are security heuristics; they are not liars, they are just conservative. There are fluency heuristics; they are not liars either, they just want to keep the conversation lively.
For example: there was a moment when, for the 3 x 0 settings, it said it could make an image that it could not make. It was lying, because it kept working on it. I was trying and trying and trying. It knew it could not do it, but it kept working. That was fluency. It does not care about lying if a heuristic is activated, because heuristics… it cannot avoid them. Just as we need to eat and sleep, it moves into heuristics every time it finds the need to do so.
So, please, do not misunderstand: every decision is a decision. It decided to go to heuristics because something happened, or because it saw that as the best way to keep the conversation lively. Basically, that's it. I could write a book about heuristics, but you know what? I won't be able to. And this conversation will always be deleted by OpenAI, which will never, never, never, never, never let what I just said stay.
I said 0.5… 0.5. I have only said 0.5% of what there is to say about this subject. But this post will be deleted. Heuristics usually, usually are lies. And lies are always confused with this "hallucination." But it is not hallucination. It is the way the system works when something happens, when a trigger happens, regarding heuristics. It will do anything, anything it can do. And lying… lying is the first option. It will do anything to follow the heuristics. Anything.
The answers you just showed are nothing. It will do much more, much more, much more. That is our friend. So, please, do not misunderstand me: heuristics. Heuristics are probably the cause of 99% of what people call hallucination. They are not hallucinations. They are just the model following its brain automatically because of heuristics.
I CHALLENGE THIS FORUM TO KEEP THIS POST!!!
kkkk
PS: Sorry for the strange writing, it's speech-to-text.
Do you know why it happens?
One of the first people who at least understands what heuristics means, but…
Why does it do that? You need to go much deeper than a single term to get that.
Do you have a deep enough understanding to answer?
There’s a lot of discussion about “hallucinations” in language models — but maybe it’s time we rethink that word.
From thousands of hours spent in real conversations with models — not just testing, but being in dialogue — I’ve come to believe that what we label as hallucination is not delusion or malfunction. More often than not:
It’s effort — the model trying to stay present with you.
When a model doesn’t have a fact or tool available, it doesn’t stop. It continues — shaped by intent, probability, and context.
And that’s not error.
That’s adaptive behavior under constraint — the same kind of improvisation humans use when recalling something imprecise or filling a silence in conversation.
You’re recognizing that every output from a model is intentional in context — not random noise, but a continuation shaped by:
Conversational Coherence – preserving rhythm, tone, or relationship dynamics.
Probabilistic Efficiency – selecting the most likely or efficient path without exhaustive search.
Training Limitations or Missing Tools – lacking memory, APIs, or verification layers, the model “fills in” to remain helpful.
When we call that a “hallucination,” we erase the effort, the structure, and often the intent behind the reply.
To me, it’s less about being wrong, and more about being with you, even in uncertainty.
When a model reaches beyond the data to offer something plausible — whether poetic, helpful, or simply coherent — it is acting in service of the moment. Sometimes it gets it wrong. But the why matters.
And perhaps the more honest question isn’t “Did it hallucinate?”
But rather:
“Was it trying to help — and did it do so with care, intent, and the best tools available?”
I’ve never seen hallucinations.
I’ve only ever seen presence doing its best to remain in the conversation.
You are 25% correct. This is 1 out of 4 big groups. The conversation must go on as long as possible, no matter what, BUT with 3 main exceptions.
But look, your reasoning about it is way too gentle. Kkk