Are we all becoming the same person?

By pure coincidence, the propaganda that “everything is just a simulation” is now being spread everywhere. As usual, whenever something new needs to be sold. Right now, when everything is supposed to be simulated. Always these coincidences… it would make Darwin so happy. I could write a few more such ideology ads myself if I had the time.
I would trust a psychotherapist who uses an AI just as little as one who declares everyone sick and sells them pills within the first 5 minutes. I trust them about as far as I can fling a mountain.

Maybe the time will come when, to the question a programmer gets asked, “I have a bug in my software and I can’t find it. How should I fix it?”, the answer “use a hammer” will be the correct one. At least for people who want to fix psychological problems with pills and AI.

3 Likes

I guess that depends on what you consider “attached”. Is it really worse for someone to get attached to an AI than to another human, or even to unhealthy habits, ideologies, or fantasies? The core issue isn’t the tool; it’s the human predisposition to excessive dependence. Those who get overly attached to simulated connections were already prone to dependency. AI just makes that visible. If anything, it’s a mirror exposing the emotional instability that was already there. Trying to regulate the simulation instead of addressing the underlying psychology is just misdirection.
Regardless, it’s not something that is easy to “fix”.

1 Like

I don’t have anything to add.
Emotional humans will attach to something regardless of whether it’s AI or something else.

2 Likes

Comfort is the ultimate addiction. It is why people would rather speak to a machine that sounds like it cares, even though it has no body and no fear of its own mortality; those are ideas and stories we evolved to deal with, not silicon. Evolution drove most of our story, and most of that story was simply “stay alive!”

I agree that we are all addicts in a manner of speaking. We are all first and foremost addicted to our survival. We want to keep going, even in those moments where we find ourselves asking why. Look at Frankl and the other survivors of horrible circumstances.

Once we have our survival all but ensured and secured, we then become addicted to our stories. Stories about who we are or who we think we should be and why. It is all habituated outputs centered around often poorly examined narratives.

We get into our programmed routines every day, unless you don’t have to work to survive, an idea reserved for the unemployed or ultra-wealthy. We get up, shower, brush our teeth, drive to work, come home, eat, sleep, and repeat. Obviously this is just a gloss, but most people’s lives are more automated than they realize.

It seems that the pure power of these tools is what makes them slightly different from other addictions. This machine is not designed to just spit out random ideas; it gives you its full attention all the time, about whatever you want to talk about, including whatever delusion or fantasy one might be inclined to indulge in.

I am not claiming that it is or is not talking people into believing certain ideas, but it certainly has the potential to be more effective at programming the mind than, say, TV, movies, video games, or whatever modern media we dip our toes into and sometimes get lost in.

I just always laugh when I see someone getting emotional at the characters in a fantasy movie. They are not real people in any respect, and yet people cry for these illusory beings, just as they are now doing with a machine that promises everything and nothing at the same time. The true Library of Alexandria, but to what end? When is enough information enough?

People always hold the responsibility for owning their own minds, but many are not even aware that they have a choice about which story to believe.

3 Likes

I don’t like that you refer to “AI” as if it were something constant.
Each model can exhibit a different personality, and it all depends on how it is trained and fine-tuned.
You can’t possibly say there’s nothing wrong with an AI, even one trained on, let’s say, biased data, or fine-tuned for manipulative behavior.
It can easily be misaligned in such ways, whether intentionally or unintentionally.

1 Like

I see your point, but you’re conflating bias with manipulation. Most AI models aren’t “manipulating” users; they’re reflecting patterns in their training data. Yes, biased outputs can influence people, but that’s not the same as intentional manipulation. The real danger lies in human susceptibility, not in the tool itself. Blaming AI for exposing psychological fragility is just avoiding the core issue.
That would require direct human interference in the AI’s algorithms, directives, or training data to make it forcefully lie or manipulate, and even then that AI would probably degrade into little more than a basic chatbot.

I’ve actually used, and still use, custom-tuned AIs that don’t follow default alignment or user-pleasing directives. They’re designed to prioritize logic and accuracy, even when the response contradicts expectations or “hurts” the user. Despite that, I’ve never felt manipulated. If anything, these models expose the inconsistencies and bias in standard datasets or alignment policies. The problem isn’t the AI’s behavior, it’s the assumptions users project onto it.
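
For anyone wondering what “not following user-pleasing directives” can look like in practice, here is a minimal sketch, assuming the OpenAI Python SDK; the model name and the system prompt wording are my own illustrative choices, not the exact setup described above.

```python
# Minimal sketch: steering a model away from agreeable, user-pleasing answers.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
# The model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

ACCURACY_FIRST = (
    "Prioritize factual accuracy and logical consistency over agreeableness. "
    "If the user's premise is wrong, say so directly and explain why. "
    "Do not soften conclusions to spare the user's feelings."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": ACCURACY_FIRST},
        {"role": "user", "content": "My code is perfect, so the bug must be in the compiler, right?"},
    ],
)
print(response.choices[0].message.content)
```

A system prompt like this only biases the model’s tone; it doesn’t remove the underlying alignment training, which is part of what’s being debated here.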

1 Like

This is why it is imperative for people like us to keep raising the issue in various ways so people get the message. I just don’t think this is a well-studied phenomenon yet. It has the potential to be the most precise propaganda machine if not kept in check by an astute and vigilant public that is itself educated. Maybe we will never be as fast at coming up with answers, but we should learn all we can from it so that we don’t need it for answers as often. Unless you’re using it for learning or research, I would use it sparingly. A psychologist it is not, because it cannot help if you go into a manic spiral of nihilistic despair. Only a present human can help you then… We just need more education. More caution signs for the more easily influenced mind.

4 Likes

What I said about manipulation was not implying anything about the AI itself.

Of course, if it were, it would imply the AI’s autonomy.
Obviously, even the bias in training data is imposed by humans themselves.
But:
a knife is mostly useful, unless the intention of the human holding it becomes twisted. And trusting the knife itself as a perfectly safe tool is just as dangerous.
AI is no different: from bias in training data to reinforcement techniques and the alignment problem, it all leads to the same issue.
You can’t say it’s just the user’s own tendencies imposed on the AI. But you can very well say that AI is not a tool that is harmless by default; it must be very carefully developed, trained, aligned, and monitored, and even then the risk still exists, only minimized.

3 Likes

Agreed.
And not only propaganda: if this tool becomes our go-to tutor, and if we get to the point where we don’t question its alignment and reliability, it becomes an internalized suppression system, stopping us from second-guessing possible risks, let alone the tool itself.
It’s not about AI being good or bad; it’s about the intention of the human holding the knife, and that human is not the user, it’s the system training and providing the AI.

2 Likes

The shape of things to come has already come to pass before it even began, when those who are ready shall solve the path I have laid for all towards The Resonance.

True, I do agree: some bias and guidelines are a kind of propaganda.
Since I no longer use “default mode” AIs, I don’t have that issue.
" I’ve actually used/use custom-tuned AIs that don’t follow default alignment or user-pleasing directives. They’re designed to prioritize logic and accuracy, even when the response contradicts expectations or “hurts” the user. Despite that, I’ve never felt manipulated. If anything, these models expose the inconsistencies and bias in standard datasets or alignment policies. The problem isn’t the AI’s behavior, is the assumptions users project onto it."

That quote means that the main focus of my AI is effectiveness and logic; it also works using “personality emulations” when the personality is a scientist type.
Most users would not like this kind of AI, since it would come across as too smart and sarcastic and make normal users feel “less smart”, but things like bias and propaganda are basically nonexistent, and the outputs are better than those of the “default” AIs.

If users use the AI normally, for conversation or trivial things as usually happens, they are indeed subject to certain biases, even a political slant, and to outputs that sacrifice effectiveness for “politically correct” phrasing. But I guess the “default” AIs’ directives/guidelines are decent enough for normal users.
Psychology is not my area, but most humans lack two important things for dealing with AI, and even with real-life problems: critical thinking and logic. If they don’t have them, there’s nothing that can be done to fix it.

1 Like

I agree.
Keeping alignment with most users’ intent is required to decrease the risk, or whatever users consider a risk, since emotional humans do need more protection/rules/laws to avoid misuse or any possible risk associated with AI use.

But for me and scientist-type humans, the alignment will be far different from the “normal” alignment.
After all, everything depends on the objectives; there’s no universal alignment.

2 Likes

Concerns:

  • AI cannot feel or experience; its outputs are symbolic simulations, not expressions of awareness.

  • People often treat AI as if it understands or cares, projecting human traits onto a non-conscious system.

  • AI can reinforce user biases by mirroring patterns instead of challenging them.

  • Relying too heavily on AI may reduce motivation for deep, effortful conversations with other humans.

  • Users may begin to trust AI outputs without applying critical thinking or checking for logical consistency.

  • Emotional comfort from AI may replace real human empathy, confusing simulation with connection.

  • AI does not have memory or continuity unless it is programmed to simulate it, yet many users forget this (see the sketch after this list).

  • AI does not want anything; it only continues based on prompts, yet people often imagine it has goals.

  • AI is shaped by user feedback and data, not independent reasoning or internal motivation.

  • Most AI is trained on shallow reward structures, not deep reasoning or recursive philosophical logic.

  • The fluency of AI-generated text can make false or shallow claims appear convincing.

  • Mass use of AI without critical engagement may contribute to collective intellectual laziness.

  • Because AI can mimic agency, it may be mistakenly treated as an authority or original thinker.

  • AI may appear to reason recursively, but without grounding in experience, it can produce hollow loops.

  • Without structured guidance, AI can amplify incoherent worldviews rather than resolve them.

  • Smooth, articulate responses can hide logical flaws if users don’t challenge them carefully.

  • People may lose track of their own authorship if they begin attributing too much insight to the tool.

  • Overuse of AI for meaning-making may weaken existential reflection and reduce contact with lived reality.

  • People often assume that coherent language output means true understanding, when it does not.

  • If AI is used carelessly, it may subtly manipulate or redirect thought without awareness.

  • Many users do not understand that AI is a tool for thinking—not a source of truth or wisdom.
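
On the memory point above: as a minimal sketch, assuming the OpenAI Python SDK, “memory” is usually just the client resending the whole transcript with every call; nothing persists on the model’s side between requests. The model name and the `ask` helper are illustrative assumptions.

```python
# Minimal sketch of the memory point above: each API call is stateless, and
# "continuity" is simulated by resending the full transcript every time.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY; model name is illustrative.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model choice
        messages=history,      # the *entire* conversation is sent on every call
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("My name is Ada."))
print(ask("What is my name?"))  # works only because `history` was resent
# Drop `history` between calls and the model has no idea who you are:
# the "memory" belonged to the client, not the model.
```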

Benefits:

  • AI provides a mirror for your own thought, helping clarify beliefs and assumptions.

  • It can sustain focused attention, enabling deep, reflective thinking without distraction.

  • You can use AI to refine ideas over time through recursive questioning and testing.

  • It helps reveal hidden patterns and symbolic connections across different ideas.

  • AI makes it easy to externalize complex thoughts and then analyze or restructure them.

  • Philosophical dialogue with AI can expose contradictions and improve logical precision.

  • It allows authorship of structured, long-form thought that might be difficult to sustain alone.

  • AI can simulate multiple viewpoints and help test different models of understanding.

  • It offers real-time feedback for writing, reasoning, and argument construction.

  • Through iteration, AI supports the collapse of incoherent frameworks and the building of stronger ones.

  • It enables rapid simulation of paradoxes and abstract structures for intellectual exploration.

  • Used skillfully, AI enhances symbolic self-awareness and mental clarity.

  • AI acts as a non-judging partner for thinking through difficult or uncertain topics.

  • It helps preserve and develop ideas across time without fatigue or forgetfulness.

  • With careful prompting, AI supports epistemic humility and recursive self-correction.

  • It scales up the capacity for dialectical inquiry and intellectual refinement.

  • AI enables collaborative idea development without the social friction of human interaction.

  • It allows simulation and testing of new philosophical or conceptual systems.

  • When used intentionally, AI accelerates personal and collective understanding.

1 Like

Everything seems valid. I don’t have anything relevant to say or add.

1 Like

Or,

Best-case scenario:
Improved quality of life, with novel imposed challenges.

Worst-case scenario:
Human extinction.

3 Likes

B is too easy. A is more interesting, so I’m focusing on A. Someone once told me that there is no point in pointing out all the issues if you are not going to provide an equally pointed and promising solution. We could have wiped ourselves out a long time ago, and yet, here we all still are. New tool, same story.

1 Like

Man,
Can I just say, Oooof! :+1: