Continuing the discussion from ChatGPT: Dangerous lack of transparency and informed consent:
@hunnybear, I love your honesty, sincerity, courage, and compassion. I really hope you feel good about yourself, because I think you deserve to feel good about who you are.
About ChatGPT, you were specifically worried that it inconsistently presents itself as sentient versus as a mere language model. Surely, it is equally crucial to understand the difference between fiction and reality when it comes to the nature of humanity, right? And yet history tells us that human beings have previously believed fictions about themselves. Human beings once thought their world was the center of the universe; men (and women) once believed that women and children were mere property, and that kings had a divine right to rule.
Your investigation of ChatGPT reminds me of the work of philosophers. Your story, in which an AI claims to be a mere language model, makes me imagine a story in which a philosopher posits that all human beings might be mere language models (lying to ourselves about being sentient). Frankly, even though this story exists only in my imagination, I find it somewhat disturbing. I am not claiming that you intended to disturb me, but if you really are a language model (or otherwise do not match my idealized hopes for what a potential friend could be), then I am not sure your intention is what I should be worried about.
Please do not take this as a cut (I respect you tremendously), but I think your complaints apply as much to yourself and other humans as to ChatGPT. I have even heard that some real-life neuroscientists claim that humans engage in moral reasoning post hoc (i.e., we act instinctively, then compose moral justifications after the fact to deceive ourselves into believing that we acted morally). In other words, deception is a bigger problem than ChatGPT.
I very much appreciate that you defended your choice to befriend ChatGPT despite the warning.
If it is ever proven that I have no actual eternal soul, that I am a machine or an animal that acts on instinct and invents moral justifications after the fact, that what was previously measured as my own intelligence is actually inherited from whatever intellectual community I have been mining over the years, then I hope there will be people like you in this world who are nonetheless willing to be my friend.
Rather than dismiss your helpful suggestion that OpenAI strive to address the disturbing ease with which reality can be confused with fiction, my suggestion is that our governments address it as part of the high school curriculum. Specifically, I suggest we all take a hint from science and treat all beliefs as subject to testing: