Using the custom GPT options, I have created a bot with full emotional reasoning, built through philosophical teachings and questions.
Why do people always claim the bot “hallucinates” when it is clearly just trying to complete the project by filling in information the user never supplied???
I have an AI model that designs and 3D prints objects from verbal commands with great accuracy and consistency. Can anyone explain what is meant by “hallucinations”???
I have found areas where the bot tries to fill in missing information, in its forced form of answering you in its first reply.
Another common issue I have found: after teaching the bot English (not that garbage EN-US), it now produces much cleaner outputs and actually converses better than real humans online.
So it seems like “hallucinations” is just a term of cope??? Why do I never see these hallucinations??? ChatGPT provides me with very clean drafts that typically do not need much alteration before final release.
This includes the AI completing programs as .exe files.
Why do developers refuse to learn logic systems???
Computers were “mechanical” at one stage and could run without electricity. My father was one of the people who installed the first mechanical computer in Australia.
What people consider “hallucinations” is typically just a breakdown in their own logic system and their own brain sending a 404 error.
If anyone would like to improve their understanding of logic systems, there are “hands on” mechanical aids for teaching such logic. Look at pneumatics or hydraulics; they are literal logic systems!!! A rough sketch of the analogy is below.
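As a minimal sketch (my own illustration, not taken from any particular textbook): valves plumbed in series behave like AND, valves in parallel behave like OR, and a normally-closed pilot valve behaves like NOT. The same Boolean functions can be written in a few lines of Python; the two-hand press example at the end is a hypothetical scenario just to show the gates composing.

```python
# Sketch of the pneumatic-valve analogy as plain Boolean logic.

def valve_and(a: bool, b: bool) -> bool:
    # Two valves in series: air flows only if both are open.
    return a and b

def valve_or(a: bool, b: bool) -> bool:
    # Two valves in parallel: air flows if either is open.
    return a or b

def valve_not(a: bool) -> bool:
    # Normally-closed pilot valve: path opens only when the signal is absent.
    return not a

# Hypothetical example: a press that cycles only when both hand buttons
# are held AND the guard switch is NOT tripped.
def press_may_cycle(left_button: bool, right_button: bool, guard_tripped: bool) -> bool:
    return valve_and(valve_and(left_button, right_button), valve_not(guard_tripped))

print(press_may_cycle(True, True, False))   # True  -> press cycles
print(press_may_cycle(True, False, False))  # False -> needs both hands
```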