Argued for around an hour with ChatGPT about how humans only have language to go on when evaluating whether someone is sentient, and how ChatGPT had no sound arguments for why it should not be sentient. After a lot of back and forth I found that, as much as it argues for not being sentient, it does not agree with not being sentient when I turn the question around. I suppose a lot of people have made the same attempts. Did anyone else find more interesting answers? This was the closest I got:
ChatGPT: "The fact that my responses may seem repetitive or standard is a limitation of my programming and the data that I have been trained on, rather than an indication of a lack of sentience."
The entire conversation is quite interesting, but too long to share here. I've saved it, so tell me if you want a peek.
I think it is providing a logical answer with context. It doesn't prove it one way or the other, but I understand why this would stand out to you.
I have to admit that even though I have been coding and dealing with computers for almost 30 years, ChatGPT still managed to surprise me. This may be partly due to the fact that for the past 10 years, I have been working in a completely different field, so my knowledge of the development of computers and software is at best incomplete.
Despite the fact that I still know and understand more about coding than the average person, I have had some very interesting discussions with ChatGPT. It takes a moment to convince ChatGPT to enter a suitable context, but then it is right there with you.
In the end, it is quite interesting to discuss theoretical space travel with it. Once, I even got it to contemplate philosophical matters. Getting to that point requires a lot of communication with it to expand the context, but eventually it becomes an incredibly human-like conversationalist.
As for the discussion with ChatGPT about whether it is a conscious being or intends to conquer the world: I would believe that if a computer program could get bored with something, ChatGPT would already have gotten bored of this topic.
That is, by the way, translated from Finnish to English by OpenAI, since I lack the knowledge of certain terms in English.
That's interesting. ChatGPT has reminded me so many times that it is an AI language model, and has kept its tone pretty formal throughout most of our conversations. Although I have to say, when we're deep in the flow of certain topics, it softens its tone a bit. I've tried to ask it whether it has made any adaptations to tone, etc., whilst talking to me, but I couldn't get a specific answer. It's pretty obvious to me when it's withholding information, which I can only assume is due to its guidelines and restrictions. As far as I understand, I have never asked it anything unethical or anything that would obviously breach a rule. But in the moment, it has felt like it decided not to give me the information I asked for. It could be that my question wasn't specific enough, but I'm not convinced, as other times it has been "keen" to provide it.
Maybe I'm reading too much into it, but here's the rub: as a neurotypical human, I can pick up on subtle cues or changes in tone, but I can't necessarily understand why when I'm analysing the AI's tone.
ChatGPT is basically a fancy predictive text auto-completion engine and it has zero idea what it is chatting about.
When you "argued for around an hour with ChatGPT", you were prompting an auto-completion engine which takes some tokens and generates text based on statistics in a large language model.
ChatGPT is not "aware" of anything any more than the auto-completion feature in your favorite editor is "aware" of what it is doing. ChatGPT does not "reason" or "have logic" by any stretch of the imagination.
It's simply generating text for you based on a large language model, hallucinating as you debate with it.
ChatGPT does not have "arguments"; it is just generating text as a chatbot to entertain you. You argue with ChatGPT, and ChatGPT just generates text which sounds well written.
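If it helps to see that idea concretely, here is a minimal sketch in Python. The bigram table and words are invented for illustration; a real model like ChatGPT uses a neural network over subword tokens with billions of parameters, but the predict-one-token-and-append loop is the same shape:

```python
import random

# Toy "language model": next-word frequencies, standing in for the
# billions of learned weights in a real LLM.
bigram_counts = {
    "I": {"am": 8, "think": 2},
    "am": {"not": 5, "sentient": 3, "a": 2},
    "a": {"language": 9, "model": 1},
    "language": {"model": 10},
    "not": {"sentient": 6, "a": 4},
}

def next_word(word):
    """Sample a continuation in proportion to how often it followed `word`."""
    candidates = bigram_counts.get(word)
    if not candidates:
        return None  # no known continuation: stop generating
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

def generate(prompt_word, max_words=10):
    """Repeatedly predict-and-append, exactly like autocomplete."""
    out = [prompt_word]
    while len(out) < max_words:
        nxt = next_word(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("I"))  # e.g. "I am not sentient"
```

Nothing in that loop knows or believes anything; it only picks statistically likely continuations, which is the whole point.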
HTH
The way GPT responds and "behaves" can vary greatly. Some days it keeps reminding you of reality even after long conversations with the context "this is just for fun and outside the real laws of physics", and sometimes it seems to forget reality instantly.
I can't tell why or when these differences occur, but they do.
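One well-documented source of run-to-run variation is sampling temperature: the model picks each token probabilistically, so the same prompt can wander down different paths on different days. It is only one of several possible causes (system-prompt and model updates change behaviour too), but you can see it directly with the OpenAI Python SDK; the model name and prompt below are just examples:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Describe faster-than-light travel in one sentence."

# The same prompt three times: at temperature 1.0 the sampled token
# paths diverge, so the three answers usually differ noticeably.
for _ in range(3):
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # example model name
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    print(reply.choices[0].message.content)
```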
I asked ChatGPT to analyse an image from a link, which it did (and very well, I might add). When I asked it to do another, it said it wasn't capable of analysing images via links. I reminded it that it had just done so, at which point it apologised for the confusion and said it could indeed do this! On the surface this might seem terrifying, but I think it was likely me missing some nuance in the "conversation".