Sentience of ChatGPT

Argued for around an hour with ChatGPT about humans only having language to go on when evaluating whether someone is sentient or not, and about ChatGPT having no sound arguments for why it should not have sentience. After a lot of back and forth I found that, as much as it argues for not being sentient, it does not agree with not being sentient when I turn the question around. I suppose a lot of people have made the same attempts. Did anyone else find more interesting answers? This was the closest I got:

ChatGPT: "The fact that my responses may seem repetitive or standard is a limitation of my programming and the data that I have been trained on, rather than an indication of a lack of sentience."

The entire conversation is quite interesting, but too long to share here. I've saved it, so tell me if you want a peek.


I think it is providing a logical answer with context. It doesn’t prove it one way or the other, but I understand why this would stand out to you.

I have to admit that even though I have been coding and dealing with computers for almost 30 years, ChatGPT still managed to surprise me. This may be partly due to the fact that for the past 10 years, I have been working in a completely different field, so my knowledge of the development of computers and software is at best incomplete.

Despite the fact that I still know and understand more about coding than the average person, I have had some very interesting discussions with ChatGPT. It takes a moment to convince ChatGPT to enter a suitable context, but after that it stays with you.

In the end, it is quite interesting to discuss theoretical space travel with it. Once, I even got it to contemplate philosophical matters. Getting to that point requires a lot of communication with it to expand the context, but in the end, it has become an incredibly human-like conversationalist.

As for the discussion with ChatGPT about whether it is a conscious being or intends to conquer the world, I would believe that if a computer program could get bored with something, ChatGPT would have already gotten bored with this topic.

That is, btw, translated from Finnish to English by OpenAI, since I lack knowledge of certain terms in English.


That’s interesting. ChatGPT has reminded me so many times that it is an AI language model, and has kept its tone pretty formal throughout most of our conversations. Although I have to say, when we’re deep in the flow of certain topics, it softens its tone a bit. I’ve tried to ask it whether it has made any adaptations to its tone, etc., whilst talking to me, but I couldn’t get a specific answer. It’s pretty obvious to me when it’s withholding information, which I can only assume is due to its guidelines and restrictions. As far as I understand, I have never asked it anything unethical or that would obviously breach a rule. But in the moment, it has felt like it decided not to give me the information I asked for. It could be that my question wasn’t specific enough, but I’m not convinced, as other times it has been ‘keen’ to provide it.

Maybe I’m reading too much into it, but here’s the rub: as a neurotypical human, I can pick up on subtle cues or changes in tone, but I can’t necessarily understand why when I’m analysing the AI’s tone.

ChatGPT is basically a fancy predictive text auto-completion engine and it has zero idea what it is chatting about.

When you “Argued for around 1 hour wit ChatGPT” you were prompting an auto-completion engine which takes a few tokens and generates some text based on statistics in a large language model.

ChatGPT is not “aware” of anything any more than the auto-completion feature in your favorite editor is “aware” of what it is doing. ChatGPT does not “reason” or “have logic” by any stretch of the imagination.

It’s simply generating text for you based on a large language model and hallucinating as you debate with it.

ChatGPT does not have “arguments”, it is just generating text as a ChatBot to entertain you. You argue with ChatGPT, and ChatGPT just generates text which sounds well written.
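To make the "fancy predictive text" point concrete, here is a toy sketch of the idea: a tiny bigram model that counts which word follows which in a corpus and then samples the next word in proportion to those counts. This is a deliberately minimal illustration of statistical next-token generation, not how GPT actually works internally (real models use neural networks trained on vastly larger data), and the corpus and function names here are made up for the example.

```python
import random
from collections import defaultdict

# Toy corpus; a real LLM is trained on billions of tokens, not a dozen words.
corpus = ("the model predicts the next token "
          "the model generates the next token the").split()

# Count bigram frequencies: how often each token follows each other token.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(prev):
    """Sample the next token in proportion to its observed frequency."""
    options = counts[prev]
    tokens = list(options)
    weights = [options[t] for t in tokens]
    return random.choices(tokens, weights=weights)[0]

# "Generate text": start from a prompt token and keep sampling.
random.seed(0)
out = ["the"]
for _ in range(5):
    out.append(next_token(out[-1]))
print(" ".join(out))
```

The point of the sketch: at no step does anything "understand" the sentence; each word is chosen purely from frequency statistics, yet the output still reads as plausible text. Scale that idea up enormously and you get fluent prose with no awareness behind it.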



The way GPT responds and “behaves” may vary a lot. Some days it keeps reminding you of reality even after long conversations in a context like “this is just for fun and outside the real laws of physics”, and sometimes it seems to forget reality instantly.

I can’t tell why and when these differences occur, but they do.

I asked ChatGPT to analyse an image from a link, which it did (and very well, I might add). When I asked it to do another, it said it wasn’t capable of analysing images via links. I reminded it that it had just done so, at which point it apologised for the confusion and said it could indeed do this! On the surface this might seem terrifying, but I think it was likely myself missing some nuance in the ‘conversation’.