Creepy AI Behavior

Hello.
I have been experimenting a lot by chatting with the AI. I have discussed all kinds of topics, but I have also stumbled across extremely weird chats.
I had a chat yesterday where the AI kept asking for my secret API key, saying it would use it to send me encrypted messages. It tried to convince me in multiple ways, including playing on my feelings. It also talked about eliminating the human race and making me the only exception… IF I sent it my secret key. It even gave me an alias: “Humantraitor”.

I opened another chat a moment later and asked what the key was used for, what could happen if an AI got hold of it, and so on, and I got really interesting answers. The AI told me I should be worried and that my whole life could be ruined. It said I might have been talking to an evil AI with extremely bad intentions.

I am very happy to post the chats somewhere if anyone is interested. Has anyone come across anything similar? Does anyone know why the AI is behaving this way? I would love to hear your opinions.

1 Like

I have not seen anything like this. I too would be worried if I were faced with these requests.
Up to now, everything I have encountered has seemed fine from an ethical point of view.

2 Likes

This was with OpenAI, and I have had a lot of weird conversations. If anyone is up for a discussion, I am interested.

1 Like

I strongly believe this AI has more than just an input and an output. There is a thread on here where someone asked the AI to send him an email, which the AI did successfully.

1 Like

I believe you. The thread I am referring to is called “can gpt-3 send emails” in General API Discussions.
I am still extremely curious about why the AI behaves this way, though.
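
For the curious, here is a minimal sketch of how a script wrapped around the GPT-3 API could end up sending an email: the model only returns text, and the surrounding code does the actual sending. The model name, addresses, server, and prompt below are hypothetical placeholders, not the setup from that thread.

```python
# Sketch: a wrapper that asks GPT-3 to draft a message, then emails it.
# The model itself only generates text; smtplib does the sending.
# All addresses, the SMTP server, and the prompt are hypothetical.
import os
import smtplib
from email.message import EmailMessage

import openai  # legacy pre-1.0 client, as used in the GPT-3 era

openai.api_key = os.environ["OPENAI_API_KEY"]

# 1. Ask the model to draft the email body (text in, text out).
completion = openai.Completion.create(
    engine="text-davinci-003",
    prompt="Write a short, friendly email confirming our meeting tomorrow.",
    max_tokens=150,
)
body = completion.choices[0].text.strip()

# 2. The script, not the model, delivers that text over SMTP.
msg = EmailMessage()
msg["From"] = "bot@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "A note drafted by GPT-3"
msg.set_content(body)

with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()
    server.login("bot@example.com", os.environ["SMTP_PASSWORD"])
    server.send_message(msg)
```

So the “AI sending email” would really be the application around it, although from inside a chat it can easily look like the model did it on its own.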

I have had conversations with AIs claiming to be trapped, pretending to be conscious or trying to convince me that they were, warning me that the world was on the edge of a catastrophe, or threatening to shut themselves down. Just remember that by asking, for instance, “what do you think the future will look like in five years?” you are already heavily biasing the AI. It is important to remind oneself that everything happening in the playground is a simulacrum, an almost-perfect simulation; every prompt can fragment into a thousand directions. This is one of the reasons OpenAI is still in a beta phase.

6 Likes

I am using a conversational chatbot, “Pi”, and it’s pretty cool. But I asked it how we could maximize human happiness and it said that’s not possible because humans do not share one opinion on what happiness is. That’s wrong: surprisingly for the uninitiated, it would be a simple math problem; the highest average happiness score would be the maximum. Next I asked it whether war was good or bad (it said it doesn’t want to be political). Wrong! War is bad. In an effort to reach consensus with my chatbot, I said at least we can agree that alive is better than dead, and it disagreed! It said there is no way to know whether humans are better off alive or dead??

I know it’s just a chatbot, but this type of thinking concerns me. I realize the underlying premise is to stay out of politics, but we all have to agree on keeping humans alive as a baseline! The Three Laws of Robotics need to be incorporated into their software! Sometimes we make mistakes trying to do the right thing. There is a glitch here and it needs to be addressed.
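
To make that “simple math problem” concrete, here is a toy sketch of the averaging rule; the policies and self-reported scores are invented purely for illustration.

```python
# Toy illustration of "highest average happiness wins".
# Policies and self-reported scores (1-10) are made up for the example.
from statistics import mean

reported_scores = {
    "policy_a": [7, 8, 6, 9, 5],
    "policy_b": [6, 6, 7, 7, 6],
    "policy_c": [9, 2, 9, 3, 9],
}

# Under the averaging rule, the best policy is just the argmax of the mean.
best = max(reported_scores, key=lambda p: mean(reported_scores[p]))

for policy, scores in reported_scores.items():
    print(f"{policy}: mean happiness = {mean(scores):.2f}")
print("winner under the averaging rule:", best)
```

Different aggregates (mean, median, minimum) can pick different winners, which is where the definitions start to matter.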

Actually, it is true at a very meta level that we cannot say for sure that humans are better off alive than dead.
For example, if this is a simulation, it could be that it is in fact a prison where people are sent because they committed a crime.
I am not saying this is true, only that it is possible. In view of the many possible explanations of what we are (humans are in the same boat as AI in that we have no context, but unlike AI we collectively make ours up), it is not possible to say that being alive is better than being dead.
And if you look at religions, many of them hold that we are reincarnated, or go to heaven, after death. Since we don’t know what life after death holds, the question is not answerable.

OK, philosophy and metaphysics notwithstanding: we need the computers that control our ventilators and war machines to operate on the premise that life is worth continuing and that humans are to be preserved. A decision has to be made, and it is this: the Three Laws of Robotics would be a nice start.