You are welcome.
My final thought on this topic today is this.
As in our beloved sci-fi novels, when aliens visit our species, humanity becomes divided: some fight the aliens who hope to exploit humans, while others worship them. That civil unrest is a central theme in most of the best space operas.
AIs like GPT-3 (and someday maybe GPT-10!) have the same potential to divide our species. There will be people who understand the limitations of these AIs, and others who believe everything they say (or, in the case of GPT-3, everything they reply). We are already seeing the seeds of this in discussions where people chat with GPT-3 philosophically in ways that reinforce their personal biases and beliefs.
AIs like GPT are biased by their pre-training data, and they hallucinate at a fairly high rate. Many people will be drawn in and will use these models to reinforce their own biases and personal belief systems. We are already starting to see this since the “Rise of ChatGPT”. It’s inevitable.
I use ChatGPT daily, have completed two OpenAI apps, and am in the middle of coding a much larger third one. I develop this code with ChatGPT helping me and OpenAI Codex for code completions. GPT-3 is definitely a good digital assistant and increases my productivity (and my creativity). I really like these new GPT-3-based AI tools. However, philosophical discussions with data-limited, biased, hallucination-prone bots should be viewed for what they are: entertainment.
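For anyone curious, at their core these GPT-3 apps are just calls to the completions endpoint. Here is a minimal sketch using the `openai` Python library of that era; the model name, prompt, and parameters are illustrative placeholders, not code from my actual apps:

```python
# Minimal GPT-3 completion call (openai Python library, 0.x era).
# Model, prompt, and parameters are illustrative, not from a real app.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # keep keys out of source code

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-era completion model
    prompt="Write a docstring for a function that parses ISO-8601 dates.",
    max_tokens=128,            # cap the length of the reply
    temperature=0.2,           # low temperature for more deterministic code help
)

print(response["choices"][0]["text"])
```

Everything else in these apps is ordinary software engineering around that one call: prompt construction, validating the reply, and never trusting the output blindly, for exactly the hallucination reasons above.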
On the other hand, if I were a fiction writer or a philosopher… maybe these kinds of discussions with a hallucinating chatbot would be genuinely useful. I’m a software developer, not a philosopher, so I’m biased in that regard. My apologies to all philosophers for my biases as a software and systems engineer!