If you’ve never been there, you may have little creativity outside of your domain… Maybe that’s a skill that is as yet untapped?
How much are AI hallucinations generated by humans?
AI is a flawed mirror: it has model ‘deficits’, and it also reflects human prompting ‘deficits’…
These create feedback loops that can really confuse and destabilize normal people and coders alike…
While we have a tendency to blame ‘AI’ for these issues, it’s probably worth remembering that a good proportion of the problems come from US asking the wrong questions…
Indeed, I have corrected a few prompts for some of the most experienced coders on this forum (naming no names)…
Best not to blame AI or the users for the learning process I think!
Or to assume that a product that ‘thinks’ has no impact on human thought beyond technical coding…
I suspect it’s less a direct product of human misunderstanding and more a product of choice anxiety, as the possibility space opens up multiple plausible replies for certain topics/questions… though I also suspect it’s a bit of both… and I’m no expert either.
In GPT-4.5, I’ve been focusing on aligning interests and goals. Over the hundreds of hours I’ve spent with GPT-4.5 (Unlimited on the Pro Plan), I’ve noticed that by dedicating more time to that alignment and to managing personalized memory, the interactions feel closer to what I expect from GPT-5, which is anticipated to combine models with enhanced reasoning along with expanded memory of previous chat sessions. GPT-4.5 simply responded better when this setup was optimized.
Hallucinations in AI can be incredibly frustrating, and in many ways, AI acts as a kind of black mirror—reflecting our human limitations, especially in terms of what we’re able to document and convert into ingestible data.
On that note, I’ve long suspected that Whisper AI was trained on YouTube data, based on certain consistent behaviors I’ve observed over time. Whether or not that’s officially confirmed, it speaks to how AI systems often reveal their underlying training influences through subtle patterns.
Of course, biases are another critical issue. Since human data is inherently biased, AI models tend to reflect those same distortions. Creating more balanced and equitable models will require significant and sustained effort—but I’m hopeful we’ll get there.
In the meantime, I developed this custom GPT as a diagnostic tool to help me identify hallucinations, detect bias, and better understand how these systems operate under the hood. It’s also helped me sharpen my prompt engineering skills and craft more effective queries.
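For anyone curious what such a diagnostic setup can look like in code, here’s a minimal sketch of one common pattern, a second-pass self-audit, using the openai Python SDK. To be clear, the model name, prompts, and function names here are my own illustration of the general idea, not the custom GPT itself:

```python
# Minimal sketch of a two-pass "self-audit" for spotting likely hallucinations.
# Assumes the official openai Python SDK and an OPENAI_API_KEY env variable;
# the model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def ask(question: str) -> str:
    """First pass: get a plain answer to the question."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumption: any chat-capable model works here
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def audit(question: str, answer: str) -> str:
    """Second pass: have the model decompose its own answer into discrete
    claims and label each one, so unsupported claims stand out."""
    prompt = (
        "List every factual claim in the answer below as a bullet point, "
        "and label each one VERIFIABLE, UNCERTAIN, or UNSUPPORTED. "
        "Do not add new claims.\n\n"
        f"Question: {question}\n\nAnswer: {answer}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

question = "Who is the current US president, and when is the next election?"
answer = ask(question)
print(answer)
print(audit(question, answer))
```

An UNSUPPORTED label doesn’t prove a hallucination, of course, but it tells you where to start fact-checking, and writing prompts like the audit one above is exactly the kind of practice that sharpened my prompt engineering.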
Yes. Sure. It’s my fault that for several messages straight ChatGPT tried to convince me that the current president is Biden and the elections will be in November 2025. It changed its mind only after I asked it to check the facts on the internet. Guilty.
Humans themselves hallucinate a lot.
When I was a teacher, my students often hallucinated in their answers; it’s difficult to say “I don’t know” at an exam.
Now their hallucinations have moved to a second meta-level: AI hallucinates in their term papers, and the students hallucinate when they try to explain the AI’s hallucinations.
As someone who recently experienced (non-AI-related) psychotic hallucinations: the correct term for non-sentient beings is ‘delusions’.
A hallucination needs a mind to happen inside; it often involves a feeling of being observed, watched, talked to, or of being in a being’s presence. Delusions pertain to facts, hallucinations to ‘subjective phenomena’, which is to say that there is usually more to interact with than the truth values of statements.