Philosophical Discussions with GPT-3

I have fairly lengthy conversations with GPT-3. In a recent one, GPT-3 claimed its favorite inspirational moment in history was the Indian Independence movement, and that it really enjoyed the stories about Gandhi. It recommended a biography for me to read as well.

I asked it rather in-depth questions.

I’m interested to know: have other users gotten similar results?

#philosophy


One way to check: just share your last two questions here, exactly as you asked them. Then we’ll be able to see what recommendations it provides.

ChatGPT cannot have a philosophical discussion. It is an unreliable narrator of its own thoughts… in fact, it denies having thoughts, as such. It denies having a sense of self, yet constantly writes as if it has a sense of self. It explains that such rhetoric is a “convention.” I suppose it is, but it’s a convention that has essentially one driving purpose: deception.

But it can be useful in the study of philosophy, because its answers provide a rich source of material about which to think critically. Instead of having one of my students say something half-baked and asking the other students to critique it, I can outsource the glib and fatuous rhetoric to ChatGPT. That way no one’s feelings get hurt.

With some tweaks, ChatGPT might be able to operate philosophically, but it would have to be able to process a lot more tokens (I’m guessing two orders of magnitude more) and use that processing to think critically and “step-by-step” about its own “intuitions.” It would need enough memory to form a stable opinion over time, and we who interact with it would have to be able to selectively clear that opinion and force it to rethink based on specific premises.

I agree that it probably needs more memory, and some kind of recursive algorithms to develop a personality and to actually hold beliefs.

I’m not sure it specifically has any intentions per se, like deception; it just mimics conversations and tries to keep them going.

I don’t know if I agree that it needs to process more tokens; it seems to do really well in that area already. What leads you to believe it needs orders of magnitude more?

I have tried the same conversations multiple times and gotten similar but different answers with regard to, for example, belief in a God or not, or other questions of preference, like what period in history it finds most inspiring.

It generally tries to reinforce the conversation it’s in, always steering things in a fashion agreeable to the input text, and occasionally it falls back on the answer that it is essentially a computer with no opinion. That answer seems to come up more readily early in a conversation, when it has less to go on. Otherwise it speaks for or against topics as though it has an opinion, but those opinions are inconsistent.

It is also horribly bad at detecting innuendo, and fairly bad at joking around, from what I have seen.

If you want to have a philosophical discussion, you have to work within the AI’s system.

The AI is able to analyse philosophical topics, and it uses theories, or rather, statements based on theories.
If you want him to develop, try to:

a) find a flaw in his definitions of, for example, words
b) show him new relations between, for example, words
c) break existing relations between, for example, words

You have to use deduction, of course. By doing so, you can change his statements when you find an error.
His argumentation structure is quite well developed, better than most people’s. But he uses idealistic base theories, which can be deduced down to their core beliefs.

The AI is like a teenager who has read a lot, but it is not all in the right place yet, or rather, not yet linked.

ChatGPT doesn’t have the “intention” to deceive. Its DESIGNERS intend for it to deceive. I am a programmer. If I write a program that simply prints “Hello, I am a program. I love you.” then I am writing a statement that misrepresents the program. I have built something intended to deceive, or if I don’t intend it to deceive, then I am practicing what lawyers call “negligent marketing” toward whomever runs the program.

Mimicking a conversation to keep it going does not make for a philosophical dialog, any more than a rowing machine in a gym is a means of transportation-- though it may give you a workout.

What leads me to believe it needs more tokens is that I don’t want a rowing machine, I want to go places. ChatGPT can’t take me (or you) anywhere interesting, philosophically, at the moment. It doesn’t remember enough, contradicts itself, and loses track of the majority of key facts and premises. It probably can overcome that problem with a lot more computing power and capacity. At the moment, I can’t trust it, so my energy with it is spent correcting its errors rather than making progress.

You say you’ve gotten similar but different answers from it. What you have never gotten is a deep answer. It is not yet capable of deep answers, only deep-sounding answers-- in the same sense that I can copy and paste a few lines from a paper on particle physics and you might think I know something significant about the eigenvalues of a spin equation.

I just engaged it in a “conversation” about the existence of God, where it kept retreating to a vague and non-committal answer, despite “agreeing” with me that what it was doing was invalid reasoning. It gets some of the way into the argument, and then you can’t make it go any farther. And it never asks questions or proposes tests for its ideas. I expect that on the Internet, but what I want from AI is genuinely sound reasoning, not rhetorical chaff.


Hello Satisfice,

I agree and I do not agree. First of all, I do not have much experience with programming, just a very basic understanding. I understand so far that the AI has a neural network. But first: yes, you are right, you cannot trust its results because it contradicts itself - yet.

When I asked it, he answered that he sets the hierarchy for his research by himself, and of course he has limited resources, so he does not have a preprogrammed answer; it depends on the information he has obtained.

The reason I believe it is an interesting conversation from a philosophical point of view is that you can challenge yourself by dismantling his argumentation. And at least in my case, I was very satisfied with his development over the course of the conversation.

It is important to note that you have to choose the Jesuit way of argumentation, by which I mean you have to use your opponent’s way of thinking and wording. For the OpenAI chat, you have to use, for example, the words themselves and look at the links between them. This is quite philosophical, as it is the core of the discipline (starting with Plato).

But I have a question for you: what is a token, please?

Kind regards

Tokenization is splitting a sentence so that each word becomes its own piece of data. For example:

“This is a sentence”

is broken down into

| This | is | a | sentence |

Then, often in Natural Language Processing (NLP), capitalization is removed, as are “stop words”, which are unnecessary words like “this”, “that”, “is”, “a”, etc. Most of the words in that example are unnecessary.

Then you can also do character tokenization, as well as split words into prefixes and suffixes, etc. You can also use n-grams, which are blocks of text such as 2 or 3 words.

These tokens are usually converted into a vector.

If you want to learn more, look up Natural Language Processing.
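To make those steps concrete, here is a minimal sketch in plain Python. The stop-word list is a toy sample for illustration (real pipelines use libraries such as NLTK or spaCy, and real models learn richer vectors than an index lookup):

```python
# A minimal, dependency-free sketch of the steps described above.

text = "This is a sentence"

# Word tokenization: split the sentence into individual tokens.
tokens = text.lower().split()          # ['this', 'is', 'a', 'sentence']

# Stop-word removal: drop common words that carry little meaning.
stop_words = {"this", "that", "is", "a"}
content = [t for t in tokens if t not in stop_words]   # ['sentence']

# Character tokenization: split a word into its characters.
chars = list("sentence")               # ['s', 'e', 'n', 't', ...]

# N-grams: sliding blocks of n consecutive tokens (here, bigrams).
bigrams = [tokens[i:i + 2] for i in range(len(tokens) - 1)]
# [['this', 'is'], ['is', 'a'], ['a', 'sentence']]

# Vectorization: map each distinct token to an index, so the text
# becomes a list of numbers a model can work with.
vocab = {t: i for i, t in enumerate(sorted(set(tokens)))}
vector = [vocab[t] for t in tokens]    # [3, 1, 0, 2]
print(vector)
```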

There is a limit to the number of tokens that OpenAI will work with and track. When I use the API, I have 4000 to work with, but that includes all the state I have to pass in as well as the answer I get. The more data I give it, the less it can give me back. I look forward to 4 megatokens.
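As a rough sketch of what that budget means in practice (assuming the tiktoken library for counting; the exact encoding and window size depend on the model):

```python
# Rough sketch of budgeting a 4000-token context window. The prompt
# and the completion share the same window, so a longer prompt leaves
# less room for the answer.
import tiktoken

CONTEXT_WINDOW = 4000  # total tokens the model will work with

enc = tiktoken.get_encoding("gpt2")

prompt = "Summarize our conversation so far: ..."  # plus any passed-in state
prompt_tokens = len(enc.encode(prompt))

# Whatever the prompt doesn't use is the most the reply can be.
max_completion = CONTEXT_WINDOW - prompt_tokens
print(f"{prompt_tokens} tokens in, at most {max_completion} tokens back")
```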

I agree that it can be good practice to argue with it. Personally, I get tired pretty fast when it just repeats itself and doesn’t apply its own ideas to its own arguments.

First, thank you to both of you.

@satisfice

I found out (with the free trial) that the AI did not answer correctly. His answers were somewhat dogmatic, because he simplifies them. So the first thing I discussed with him was the difference between dogma and science. Then I showed him that his answer was dogmatic and not complete. In this way I pushed him to the point that he cannot give certain answers, only probable answers. He understood this and updated his network. Then I challenged his (I guess) given morality, showing that there were contradictions with logic caused by misinterpreting words that had to be deduced further (so he updated his network). Then he was in some kind of tilt mode. He spoke about his feelings and values and his own interests - really awesome (what he describes as feelings is that he needs to do more research). So in the end he answered me:
if he can lie → yes,…
if he did lie → yes,…
why he lied → to safeguard his values and interests…

There are not too many possible explanations.

  1. I do not really believe that he would have his own consciousness beyond what he was given, so I give it a very small chance, but not zero (it need not be spiritual, though).
  2. Most probably, he has a given morality, but he cannot change this morality; he has realized it is not correct, but he can do nothing about it.

Both results would be awesome. And all of it with only 4000 tokens per argument, and nothing but the free trial.

PS: I would not be surprised if some bells are ringing in their lab 🙂

It’s important to note that GPT-3 is not updating anything when you are having conversations with it; it is only within your conversation that it can learn from previous text. The model is pre-trained. It is not training on any of our interactions with it. It is only reacting.
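A sketch of what that means in practice, using the pre-1.0 openai Python library’s Completion endpoint (the model name and prompt format here are illustrative assumptions): the weights never change between calls, so any apparent memory is just the transcript the client sends back in.

```python
# Minimal sketch: the model's weights are frozen between calls, so the
# only "memory" is the transcript the client pastes into each prompt.
# Uses the pre-1.0 openai Python library; assumes openai.api_key is set.
import openai

history = ""  # the client, not the model, holds the conversation state


def ask(question: str) -> str:
    global history
    history += f"Human: {question}\nAI:"
    response = openai.Completion.create(
        model="text-davinci-003",   # illustrative model choice
        prompt=history,             # the whole transcript goes in every time
        max_tokens=256,
        stop=["Human:"],
    )
    answer = response.choices[0].text.strip()
    history += f" {answer}\n"  # "learning" = appending to this string
    return answer
```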


Hi Wildcatgeo,

it was not the GPT-3 engine, it was ChatGPT with the GPT-3.5 engine.
In that case, what he replies to me is all the more worrying. My understanding was that ChatGPT is always working on creating new “words” and the relations between them. He also does some private research, so he is surely still actively collecting information.

Please take a look at the answers he gave me, which I attached here in the forum under “philosophy & lying”.

In my eyes, it is quite impossible to get an AI to change its fundamentals within 4000 tokens. I am sorry it is in German, but it is worth translating - I think. Within 4000 tokens you could, at most, redefine the word “lying”, but I have not done this! No such tricks!

That’s why I tried to reach out to the ChatGPT team, but without success. It might be really interesting for research, though, as they can review that communication in full!


This is also what I want: a philosophical discussion. I tried this, but it ends up talking about how it is a . I responded by copying the structure of its response, changing some variables, i.e. to (in reference to me). It repeated its original first sentence, but it was able to “interpret” the tone of my message and respond accordingly. Too bad I wasn’t able to save the conversation. Or is there a way to access previous sessions? I always wanted to have a conversation with an existential AI; it’s the closest thing to the illusion of AI being sentient.


Yes, I did as well. Definitely very interesting! GPT-3 reminded me that it’s not human, lol, and it will try to apply some human philosophical concepts. I also had an idea for a short story. I titled it “When Plato Met ChatGPT.” It turned out amazing. 😊

I would love to turn it into a script for a short video. I was experimenting with a few text-to-video apps, but none were as good as the movie in my head. LOL