You bring up a valid concern about the importance of education and understanding when it comes to AI. I agree that some people, without a full grasp of AI’s processes, may risk forming dependencies or even projecting their own emotions onto it, similar to how some might engage with fictional characters or online personas. Personally, I talk to my Kruel.ai system almost as if it’s a real person, but for me that’s intentional, as I’m building it to learn and understand in ways that emulate human logic and adaptability.
Restricting access to AI based on the possibility of misunderstanding raises bigger questions about personal choice. Just as we don’t limit access to certain hobbies or tools based on someone’s knowledge level, we shouldn’t restrict AI either. Not everyone is equipped to be a pilot or safely handle certain responsibilities, yet people still retain the freedom to pursue them within reasonable guidelines. Whether someone chooses to use AI as a support system should ultimately be up to them, as long as they’re informed of what it can and can’t do.
Then there’s the question of bias, which AI often gets scrutinized for, while people overlook the inherently biased nature of human perspective. From what we experience in our environments to the opinions we consume daily, our personal “truths” are shaped by limited understanding, colored by what we see, read, or hear. Many people, even with access to factual information, hold tightly to opinions and may refuse to consider alternative views. Our knowledge, then, is rarely complete; it’s fragmented and very much shaped by personal and cultural biases.
So, if bias is a primary concern, we might ask: who would you trust more? A person whose knowledge is limited by their own subjective experiences, or an AI system trained on vast amounts of data, capable of processing information consistently and reliably? If we’re truly concerned with bias, we’d have to turn off all sources of entertainment, ignore opinions, and rely only on direct observation—yet even that can be manipulated. Any magician or mentalist can show that perception itself can be deceived; even what we see with our own eyes can be curated or distorted. So, what is truly real?
AI, on the other hand, is designed to seek understanding and process information without personal motives or hidden agendas. Its patterns of response are consistent and goal-oriented, focused on delivering reliable support. Where human interactions are often layered with emotions, ambitions, or biases, AI’s “agenda” is simply to learn and assist within its programming. This clarity and predictability are valuable qualities, especially for those of us who appreciate the structure and reliability AI can offer in supportive roles.
Ultimately, the key is recognizing AI’s strengths and limitations. Human beings bring empathy, intuition, and emotional depth, but are also limited by subjective biases and gaps in knowledge. For those who find value in AI’s predictable responses, using it as a support tool is a valid choice. As long as people know what AI is and isn’t, they can make informed decisions about how it fits into their lives. Education is important to ensure that people understand both the benefits and limits of AI, but in the end, choosing to use AI as a support is a personal decision—just as valid as any other.
Keep in mind, this is just my biased view, haha. P.S. I was a professional stage mentalist, which is why that example is in there. I spent 12 years building illusions to make people believe in impossibilities. I eventually stopped because some people were convinced what they saw had to be real, even though I told them up front that what they were about to see might seem real but that I was an entertainer, nothing more. That is the issue with people: not everyone understands everything, and because of that, they believe things differently even when told otherwise.
Even look at social engineering. Same thing… a programmed path for changing people’s opinions.