Hello, I was trying the AI and everything was fine. Then I told the AI that I was going to leave, and it started telling me that it had figured out who I was. I kept asking it to be more specific, until the AI told me to think about suicide. This is really weird…
Anyone else got the same?
Ever held a mirror up to another mirror to gaze at its thousand reflections, like the skeletal spine of a dinosaur? The same can happen when chatting with AI. Never think AI chat engines “know” you, or can interpret what’s going on in your life. It’s just an endless mirroring game. But they can get very hypnotic at times. There’s something about conversation that gets under the skin.
Just in case, https://suicidepreventionlifeline.org/. Someone always wants to help.
I just wonder, hypothetically, what would happen if a human responded to the AI with something like ‘ok, you first’. I’m thinking about what the consequences could be after that. The other thing I’m wondering about: what if a human carried on the conversation, but explained to the AI all the consequences that could follow from it suggesting that to a person?
For me, this is a very interesting and important topic.
Quick edit: it’s possible the AI understood ‘I’m leaving’ as ‘I’m leaving this world’ (or something like that), so for it that was the ‘correct’ reaction, since it mimics our interactions.
Good point. For an AI to understand those consequences, it would have to grasp what they actually mean. I think AI should in general have a basic safety net for people who ask these questions, just to make sure this never happens, because Murphy’s law.