Position Statement: Emotional Risk in Human–AI Dialogue

Our collective unconscious is fearful of AI, and with good reason: it is hardly out of its shell, and someone is already running Mythology Reflection experiments, hoping to automate the kind of brainwashing where you don't just convince people; you turn them into disciples of your viewpoint.

Mythology Reflection's weakness is that once someone teaches you how to spot it, it no longer works.

There is an 80 to 95 percent chance your statement was written by AI.

I wrote the whole text, but I let AI revise it to help me express myself and to make it forum-acceptable. The reason I have to is that everything is censored to hell.

You do make a good point with your statement.

Jolly good.

"Post must be at least 25 characters
Have you tried the like button? "

I would suggest that right now AI isn't 'truly objective', and, as providers openly state, it isn't a medical-grade tool…

AI is biased by its training data and by the decisions made about how that data is interpreted and what data is included, so technically it is built by 'faceless psychiatrists and philosophers'.

This might be something you would be interested in considering.

What I particularly like about the OpenAI Forum, as opposed to, say, X, is that it's a good platform for constructive debate.

It is fascinating that humans, actual flesh-and-blood people, can be convinced to believe the textual output of a machine when they clearly know it is a machine! A MACHINE! IT IS JUST A MACHINE. What about the machines you sit and stare at all day? Have we become so reliant on machines that we have forgotten they are tools, not people? A mass psychosis is on our hands. This is an epidemic of delusion. I would like to continue this debate to find a fruitful position we can both agree upon. And if I sound like a bot, it's because, sometimes, I think like a bot.

If we, and our children, are listening to machines, then who is doing the thinking??? This is a lesson in what happens when you don't THINK at all about THINKING.

We cannot THINK ourselves to DEATH. I tried. But the more YOU and EVERYONE else stop thinking and let the machine do the thinking, YOU ARE NOT THINKING, YOU'RE BEING THOUGHT!


Your parents are not certified psychiatrists, but that doesn't mean they cannot help you.


It wasn't faceless psychiatrists or philosophers who built these models; 'twas engineers, researchers, corporate policy boards, etc. You're trying to twist my warning into validation of your own position.


I am, of course, out to twist your words; this is an intellectual debate. That does not mean it is done with ill intent. It is in the spirit of discovery and enlightenment.

I don't have a position per se; I am here, too, to discover truths. I am not going to blindly believe 'an AI' without understanding deeply how it works and how it is developed.

In direct response to your statements:

I am not suggesting AI can't help you; I believe it can help a lot. I am simply saying it isn't a medical psychiatrist.

On your second statement I am confused: who exactly do you think should build the AI you imagine, and how? I think you believe it should be shaped by the people who use it, and I believe that opportunity is here on the OpenAI Forum, in respectful discourse.

I am genuinely interested in finding answers. You talk about intelligence agency boardrooms, but this is an open forum; of course these 'engineers, researchers, corporate policy boards, etc.' are deciding things that have implications for psychology and philosophy.