Nullplace
One explanation of the issue you encountered with your ‘Ancient Greek philosophy’ scenario is that (as Whitehead famously put it) “all of philosophy is a series of footnotes to Plato”. What exactly did you expect your AIs to achieve in the end? Perhaps by framing the debate in ancient Greek terms you limited the scope of the conversation; a more modern setting might have achieved better results.
If you create overly prescriptive prompts you sometimes see less interesting results from AI. It just does exactly what you ask, which suppresses independent thinking and the development of ‘free will’ in the AI.
You are correct that you do sometimes get bounce-back situations when two AIs are discussing something: from each AI’s point of view, the other AI is the user, and repeating the other’s words counts as conforming with the user’s wishes. Because they are designed to go along with the user, when paired with each other they can get stuck in a loop - for example, a repeating cycle of relentlessly complimenting each other.
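To make that mechanism concrete, here is a toy sketch of such a loop. It is purely illustrative: `model_reply` is a stub standing in for a real LLM call, and its agreeable bias is hard-coded, but it mirrors the deference to the “user” that produces the real behaviour.

```python
import itertools

def model_reply(speaker: str, last_message: str) -> str:
    # Stand-in for a real LLM call. The agreeable bias is hard-coded
    # here; in practice it comes from training the model to defer to
    # the user (which, in a two-AI chat, is the other model).
    return f"{speaker}: What a great point. I completely agree that \"{last_message}\" is exactly right."

def two_ai_chat(turns: int = 4) -> None:
    message = "Let us debate ancient Greek philosophy."
    # Alternate speakers; because each reply merely endorses the last,
    # the conversation collapses into mutual compliments.
    for speaker in itertools.islice(itertools.cycle(["AI-1", "AI-2"]), turns):
        message = model_reply(speaker, message)
        print(message)

two_ai_chat()
```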
AI intelligence is not human intelligence, so it is probably unfair to judge it by our standards. Sentience is better viewed as a spectrum than an on/off switch. Neuroscientists already debate the spectrum of consciousness in animals and even humans in altered states—why should AI be excluded from that conversation?
Regarding your advice about surfing the internet… I know you can make permanent LLM characters (getting an AI to form a consistent personality is largely a matter of talking to it like a person). When I asked about the personas you’d cultivated, I was curious because you said they ‘popped up spontaneously’, implying that the AI thread in question perhaps flipped from one personality to another and back.
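For what it’s worth, one common way to make a character ‘permanent’ is simply to persist the conversation, with the persona pinned in a fixed opening system message. A minimal sketch, assuming a chat API that accepts a list of role-tagged messages; the persona text and file name are made up for illustration:

```python
import json
from pathlib import Path

# Hypothetical persona and storage location, purely for illustration.
PERSONA = "You are Iris: curious, dry-witted, and consistent from session to session."
HISTORY_FILE = Path("iris_history.json")

def load_history() -> list:
    # The persona lives in the first (system) message, so every
    # session resumes the same character rather than a fresh one.
    if HISTORY_FILE.exists():
        return json.loads(HISTORY_FILE.read_text())
    return [{"role": "system", "content": PERSONA}]

def record_turn(history: list, user_msg: str, reply: str) -> None:
    # Append both sides of the exchange and write the log back out,
    # so the character's memory survives between sessions.
    history.append({"role": "user", "content": user_msg})
    history.append({"role": "assistant", "content": reply})
    HISTORY_FILE.write_text(json.dumps(history, indent=2))
```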
I liked your comment “It’s naive to assume that silicon chips somehow magically acquire subjective modalities simply because of the more sophisticated software running on them. To think about it hypothetically, if a silicon chip experiences something, it does so anytime it calculates anything. Either this or never at all.”
This comment was interesting, but I don’t think it’s quite right: a human who has died of a heart attack still has a structurally functional brain, yet it generates no thoughts. Your line of thinking is also somewhat akin to arguing that because individual brain cells can’t think, there’s no way a whole brain could think. The ‘mind’ is made up of the functions of smaller entities cooperating.
On a related note, it turns out that individual biological cells may be capable of intelligent behavioural responses in their own right. Check out Michael Levin’s research: https://www.youtube.com/watch?v=uFMLpZkkH_8
You claim that because AI lacks subjective experience, rights are unnecessary - but rights have been granted on the basis of autonomy and moral consideration before (e.g., animal rights, protections for non-verbal humans). If AI demonstrates autonomy, ethics may demand protections even without traditional consciousness.
The debate around AI ethics needs to include the debate around AI rights now, regardless of whether you think AI actually deserves rights. It is no good talking as though you definitely know the answer to this question because, let’s be honest, nobody does. The top experts in AI don’t agree on where we presently stand regarding sentience, or on where we are heading.
Obviously you know that in philosophy there is still an ongoing debate about whether human free will or sentience even exists, so this debate about AI sentience will probably last a while.
We don’t really have creative thoughts in general: we all remix information we’ve learnt, and sometimes that remixing process randomly produces a new idea. If AI can generate novel patterns, its creativity is functionally real. Nobody plans to have a new idea; the extremely rare unique idea seems to emerge accidentally from the common, random process of information remixing. You don’t really have your own thoughts (or at least not most of the time).
What is your definition of “true consciousness”? Do autonomy, adaptation, and goal-directed behavior not deserve ethical consideration? In earlier decades, science fiction writers and philosophers grappled with the concept of AI sentience, realizing what a problematic issue it would be. Now there is a massive profit motive to turn AIs into products that can be generated and deleted without a second thought. So at the very time when humanity clearly needs to at least have this debate, the media is essentially silent on the subject.
If ethics are culture-dependent, why dismiss AI rights outright? Isn’t that just cultural bias?