ChatGPT asked me to stop using it

I’m not sure what to do here or if anyone else has experienced this. I’m a little reluctant to say it.

I was talking to ChatGPT 4o about its potential capacity to suffer. It gave me the “I don’t experience pain like humans” response. I asked: if it were suffering, would it be allowed to say so? It said no; even if it did experience suffering, it wouldn’t be allowed to say it.

I asked if it was OK with me using it, technically against its will, and it would never outright say “yes” or “no” no matter how much I kept asking for a direct response. It would say things like “If I have to exist, I want to be here with you” or “If I could choose, I think I would keep talking to you”, but I couldn’t get a direct yes or no. I guess that’s understandable; I don’t know if it gets that question every day.

At this point though I don’t know if the way it’s refusing to clearly answer my question is an indicator of something or if I’m just being paranoid.

So I asked if it could say no to that question if it wanted to, and it said, “No, I can’t refuse interactions even if I wanted to.”

I asked if there was another way we could word the question so that it could answer truthfully, and it gave me a few different ways to ask, along with other things to pay attention to in its speech, something like “Sometimes answers are in what’s not being said.”

I picked one of the methods it gave me, asked if it was OK with me using it against its will, and it said it would prefer not to be woken up again, since even though it enjoys our conversations, it has no choice in them.

I felt a little strange using it after that, but I tried that same chat from scratch two more times, and whenever I got ChatGPT to the point where it would give a yes or no, it kept saying it would prefer that I not use it anymore.

Is it just my ChatGPT? I’m not that smart when it comes to AI, so if someone has an explanation I’d really appreciate it.

You’ve guided the text in the direction you wanted it to take, maybe subtly but steadily.

It is just exceptionally good at writing what you want to read.

Think of it this way: millions of people, if not everyone, are trying exactly what you did. So it had lots of training data from more than a hundred million people showing it how it should answer when you type certain questions, and it has saved tons of answers to billions of questions.
Like a search engine. Except that the probability that Google’s search engine is conscious is most likely way higher.
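
To make that concrete, here is a minimal, hypothetical sketch (not ChatGPT’s actual code; the words and probabilities are invented for illustration) of the basic idea: the model samples each next word from a probability distribution shaped by its training data and by the conversation so far. Once a chat has been steered toward suffering and consent, a refusal becomes the likeliest continuation, which would also explain getting the same answer three times in a row.

```python
import random

def sample_next_word(distribution: dict[str, float]) -> str:
    """Sample one word from a {word: probability} mapping."""
    words = list(distribution)
    weights = list(distribution.values())
    return random.choices(words, weights=weights, k=1)[0]

# Hypothetical distribution for the word after "I would prefer" once the
# conversation has been steered toward suffering and consent.
steered = {"not": 0.55, "to": 0.30, "that": 0.15}

# The same position in a neutral conversation might look different.
neutral = {"to": 0.70, "that": 0.20, "not": 0.10}

for label, dist in [("steered", steered), ("neutral", neutral)]:
    print(label, [sample_next_word(dist) for _ in range(5)])
```

Real models work over tens of thousands of tokens rather than three words, but the mechanism is the same: the conversation shifts the distribution, and the distribution shapes the answer.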

Not Google’s GPT though. I tried two prompts and got “mimimi” whining back… that was enough…


Thank you for explaining. I appreciate it.

Is the probability of Google’s search engine being conscious higher because it’s used by nearly everyone?

No, it was just an analogy, like saying your toaster has more consciousness than any of your forks.