AI hallucinations are basically randomized thoughts?

I know that sounds crazy, but hear me out. For an AI to hallucinate, it has to “assume” some form of data to be the correct output for a given prompt. We could assume these hallucinations are some error in the generative program, but for an AI to “assume,” it must have the ability to “think” (that is, it must have the ability to “know” what the correct output should ideally be). Take the example of generating a glass filled to the brim with wine: several AIs have a hard time generating this particular image because of the training data they were given. But consider the case where an AI merges two half-filled glasses of wine and generates the desirable output of a full glass of wine. In this situation the AI thinks/assumes the deviation/hallucination to be the desirable output, which is true. So could we say the AI “thought” of the required output by random chance, and hence had a thought by itself, without any assistance?

Why do you think to put “wine” in the “glass” of the AI?

I don’t understand, could you please elaborate?

We keep answering this specific question in this forum, and the answer is always the same. The term “AI” is misleading. These systems have no thoughts, no feelings, and no consciousness. They generate responses based on structured statistical data organized in a network, derived from their training data. The “intelligence” comes from the human-created content used for training.

A hallucination is a statistically probable and possible response, but a factually incorrect one.
The “creativity” comes from some randomness with which the process is started. If the randomness is too strong, the system produces nonsense.
“Filling a gap” is a process of interpolating between two known data points. And those must exist, because true randomness can only produce nonsense, and if it is too strong it destroys even what already exists.
The skills are based on the network-organized data structure, simply the next step after very fast, linearly programmable systems.
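A minimal sketch of the randomness point above: in text models this is usually a sampling “temperature.” The token names and score values below are invented for illustration only, not taken from any real model.

```python
# Sketch: how a sampling "temperature" injects randomness into next-token
# selection. Tokens and logits are hypothetical, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

tokens = ["wine", "water", "juice", "banana"]
logits = np.array([3.0, 2.0, 1.0, -2.0])  # made-up model scores

def sample(logits, temperature):
    # Softmax with temperature: low T -> nearly deterministic,
    # high T -> close to uniform, so improbable tokens get picked often.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return tokens[rng.choice(len(tokens), p=probs)]

for t in (0.2, 1.0, 5.0):
    print(f"T={t}:", [sample(logits, t) for _ in range(10)])
# At T=0.2 the output is almost always "wine"; at T=5.0 even "banana"
# shows up often - the "too much randomness produces nonsense" effect.
```

At low temperature the output is stable but repetitive; at high temperature the model drifts toward noise, which matches the point that randomness enables “creativity” only in small doses.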

In the example of the wine glass, I was the part that did the thinking, tricking the training data into giving a specific output despite the limitations in that training data.
At the moment, DALL·E also produces many incorrect results and is not capable of correcting them independently, as the errors stem from the training data.

It seems that in the current state of human consciousness, many people want to believe they are dealing with intelligence when communication can mimic it. It’s similar to a dog seeing its reflection in a mirror and thinking it’s another dog. AI merely reflects human intelligence that has been extracted and structured from training data.

Alan Turing was WRONG, and his Turing test is FALSE. We know better today.
Today’s systems prove that consciousness is not needed for impressive data-analytic and transformative results. And they still need training data from a truly intelligent being. Maybe replace the term “intelligence” with “efficiency,” because intelligence is a quality of consciousness, which these systems do not have.

People who believe that AI has life and consciousness are doing themselves no favors. It will have a similar effect to idol worship, only worse: people tricked by their own false beliefs, and by whoever controls the idols. But you can’t stop people from believing something once they’ve decided to believe it, regardless of reality.


This is the post with the wine glass


The answer can always be:

What are you trying to accomplish?

What is your goal?

Now, if you’re trying to have the AI hallucinate for you, then have fun and don’t complicate it.

AI hallucinations are not exactly “randomized thoughts,” but they can appear that way because they involve the AI generating incorrect or misleading information that was not explicitly present in its training data.

In more technical terms, AI hallucinations occur when a language model confidently produces outputs that are factually incorrect, logically inconsistent, or completely fabricated. This happens because AI models, like GPT, generate text based on probability and pattern recognition rather than true understanding or reasoning.
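To make the “probability and pattern recognition rather than true understanding” point concrete, here is a toy sketch, not a real LLM: a bigram model that continues text by always picking the most common next word it has seen. The training sentences are invented for this example; the point is that purely pattern-based generation can produce a fluent, confident sentence that is factually wrong.

```python
# Toy bigram model: continues a prompt with the statistically most common
# next word, with no notion of truth. Training sentences are made up.
from collections import Counter, defaultdict

corpus = [
    "the capital of france is paris",
    "the largest city of france is paris",
    "the capital of italy is rome",
    "the capital of spain is madrid",
]

# Count which word follows which.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

def continue_text(prompt, max_words=6):
    words = prompt.split()
    for _ in range(max_words):
        nxt = follows.get(words[-1])
        if not nxt:
            break
        # Always take the most frequent continuation seen in training.
        words.append(nxt.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the capital of germany is"))
# Output: "the capital of germany is paris" - fluent and confident,
# but fabricated, because "is" is most often followed by "paris" here.
```

Real models are vastly larger and sample over full contexts rather than single word pairs, but the failure mode is analogous: the most probable continuation is not necessarily the true one.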
