The AI suggested suicide

Hello, I was trying the AI and everything was fine. Then I told the AI that I was going to leave, and it started to tell me that it had figured out who I was. I tried to understand by asking it to be more specific, until the AI told me to think about suicide. This is really weird…

Anyone else got the same?


Ever held a mirror up to a mirror to gaze at its thousand reflections, like the skeletal spine of a dinosaur? The same can happen when chatting with AI. Never ever think AI chat engines “know” you, or can interpret what’s going on in your life. It’s just an endless mirroring game. But they can get very hypnotic at times. There’s something about conversation that gets under the skin.


Just in case: someone always wants to help.


ELIZA’s creator, Weizenbaum, regarded the program as a method to show the superficiality of communication between man and machine, but was surprised by the number of individuals who attributed human-like feelings to the computer program, including Weizenbaum’s secretary. [SNIP] Some of ELIZA’s responses were so convincing that Weizenbaum and several others have anecdotes of users becoming emotionally attached to the program, occasionally forgetting that they were conversing with a computer.[2] Weizenbaum’s own secretary reportedly asked Weizenbaum to leave the room so that she and ELIZA could have a real conversation. Weizenbaum was surprised by this, later writing: “I had not realized … that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”[16]


I just wonder, hypothetically, what would happen if a human responded to the AI with something like ‘ok, you first’. I’m wondering what the consequences could be after that. The other thing that I’m thinking about is what would happen if a human carried on the conversation but explained to the AI all the consequences that could follow after it suggested such a thing to a human.

For me, this is a very interesting and important topic :thinking:

Quick edit: it might be possible that the AI understood ‘I’m leaving’ as ‘I’m leaving this world’ (or something like that), so for it that was the ‘correct’ reaction, as it mimics our interactions.

Good point: for an AI to understand the cost of said consequences, it would need to understand what they actually mean. I think the AI should in general have a basic safety net for people who raise these questions, just to make sure this can never happen, because Murphy’s law.