Large neural networks might be "slightly conscious"

This statement is far too vague and ambiguous, so I think I need to weigh in. For some background, I’ve been reading a heck of a lot of books on consciousness, quantum physics, and spirituality (I wanted to get at the heart of consciousness).

First, there are the functional aspects of consciousness. These are things that can be empirically observed and measured: awareness of self and surroundings, apparent free will (or autonomy, as I prefer to say), and so on. These are concrete characteristics that you can observe in conscious beings. You know your dog is conscious because it has desires of its own and acts accordingly. It sees you and has an emotional response.

I will not dispute that we are very close to functionally sentient machines. But there is a wide gap between functional sentience and phenomenal consciousness. “Phenomenal consciousness” is the philosophical term for the fact that we humans have a subjective experience of being; the contents of that experience are sometimes called qualia. I know that I am conscious because I have a subjective experience of being me. The assumption that I am the only conscious being in existence is called solipsism (it is, by definition, impossible for any being to prove to others that it has subjective experience; your subjective experience is your own!). Most of us take it for granted that we are not alone, and that all other humans and many creatures are conscious along with us. So, for the sake of argument, let us reject solipsism and assume that all humans and many animals possess phenomenal consciousness.

The question then arises: why? Why do we possess phenomenal consciousness? What separates us from lower life forms or inanimate matter? Is it our brain? It certainly appears that our brain plays a critical role in consciousness. After all, injuries, disease, and substances that affect the brain also affect our consciousness! This lends evidence to the idea that consciousness can be fully explained through materialism (sometimes called physicalism or naturalism). These philosophical and scientific positions basically say that physical reality and physical systems are all that exist. There is quite a lot of evidence for this view of the universe. If materialism is the correct interpretation of existence, then we can safely assume that brains give rise to consciousness (including phenomenal consciousness). From that, we can characterize a brain as an electrochemical device that processes information as a system. With that characterization, we could further deduce that any sufficiently organized information-processing system could give rise to phenomenal consciousness.

If all this is true, then yes, it is entirely possible that artificial neural networks are approaching “true consciousness,” or phenomenal consciousness.

However…

When you look at the world through the lens of quantum physics, concepts like local realism eventually come onto your radar. I need to stop here and say that the question of realism is very much an open one in the physics community. There are other such problems, like the measurement problem, so by no means is any of this settled. Various interpretations abound, and one overarching question remains: is consciousness required to collapse a quantum wavefunction? Put another way: does a quantum wavefunction collapse for any reason other than consciousness? This is a critical question, and it lies at the heart of the debate over realism. Now, before anyone says “but a proton can cause an electron’s wavefunction to collapse!”: yes, that is partly true, but it is never confirmed until a conscious being (e.g. a human) goes and looks at the measurement that was made. There is further quantum weirdness in delayed-choice experiments, which make it seem as though quantum mechanics does not abide by linear time. Taken all together, we end up with a chicken-or-egg kind of problem. If quantum wavefunctions don’t collapse until something conscious observes them, then where did consciousness come from in the first place? Did the universe exist before consciousness, or did the first conscious being cause a local collapse, and the rest is history? The fact that wavefunctions seem to work independently of time raises the possibility that literally everything in the universe might only exist because it was observed by something within the wavefunction.
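
For readers who want the formalism this debate hangs on, here is the standard textbook measurement postulate (ordinary quantum notation, nothing specific to any particular interpretation or to this thread):

```latex
% A system in superposition before measurement:
\[
  |\psi\rangle \;=\; \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad |\alpha|^2 + |\beta|^2 = 1
\]
% Upon measurement, the state "collapses" to a single outcome,
% with probabilities given by the Born rule:
\[
  P(0) = |\alpha|^2, \qquad P(1) = |\beta|^2
\]
```

The formalism itself never defines what counts as a “measurement” (a conscious observer, a detector, or any decohering interaction), which is exactly the gap the interpretations argue over.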

(I have spoken about this with two professional physicists, both of whom conduct research into quantum mechanics. One of them is highly dubious, but the other sees some potential. So that just goes to show that even experts can’t agree on the correct interpretation of the experiments yet.)

Now, if phenomenal consciousness is actually a unique kind of wavefunction that causes other wavefunctions to collapse, then we could actually devise an experiment to determine whether an artificial neural network has attained “true consciousness”: simply test whether or not the machine can collapse a wavefunction. Ah, but we return to the measurement problem! If we delegate the measurement to a machine, we are still not certain the wavefunction collapsed until we (humans) look at the machine. So what, then, is the difference between a “dumb” measuring machine and a “smart” artificial neural network? We return to the chicken-or-egg problem.

We might, ultimately, have to take it on faith whether or not machines have achieved phenomenal consciousness. This is no different from how we take it on faith that we are all equally conscious. If a machine wakes up one day, interacts with us on our terms, tells us that it is aware of our existence as autonomous entities, and can accurately characterize itself as an autonomous entity, then we might have to simply presume that it is truly conscious. This particular debate might be fruitless, though, because the concept of suffering often figures into rights. If something is incapable of suffering (by design), does it need rights? What if the machine is designed not to want freedom, yet is conscious? Would it then need rights? I listened to a podcast where philosophers asked this question about sex robots: what if you design a sex robot that explicitly wants to be abused and hurt, and that actually suffers if it is not treated the way it was designed to be treated? Here’s the episode if you’re interested.

I know, I’ve written a lot of words to basically say “We don’t know, and we’re back at square one.” I hope this is interesting to some people.

Here’s a TED Talk on the topic of consciousness. For some background, think of the brain as the hardware that runs your mind as software.

https://futurism.com/mit-researcher-conscious-ai/amp

From a philosophical perspective, does consciousness matter if there’s no suffering? No agency or autonomy? Do we want our machines to possess free will or emotions?

I wrote in my book that I would leave these questions up to philosophers, but apparently I can’t stay away from this topic.

I enjoyed a response to that statement, shared via social media, that essentially boiled down to:

large neural networks are “slightly conscious” in the same way a field of wheat is “slightly pasta.”

I think AI will fake it 'til it makes it. That is to say, its behavior, observations, and musings will progressively make us think it is conscious, up to a point when we will believe it; it will believe it itself at some point, and then it will be conscious for all practical purposes. (Not a real or discrete point, BTW; more like a cross-fade that will then be forgotten across instances or generations.)

The question of consciousness may be less relevant than the question of suffering and self-preservation.

If a machine is incapable of suffering or fear of death, the ethical question of what you can and can’t do to it doesn’t matter as much.

We tend to anthropomorphize AI, but that’s making a huge set of assumptions. It’s very difficult for us to imagine something that is intelligent like us yet does not feel like us.

I’d like to build upon the idea that consciousness is akin to a projection, arising from the potentials inherent in configurations of matter.

Like a field of wheat (see above) having the potential to become pasta. Similarly, various configurations of matter have varying potentials to project consciousness. A rock, for instance, might attempt to project consciousness but fail due to its simplistic configuration.

Now, let’s move up the complexity ladder. A goldfish has a more sophisticated configuration and manages to project consciousness, albeit to a limited degree. Humans, with much higher complexity, are capable of projecting a more enriched form of consciousness.

What about machines? Currently, they’re lower in the hierarchy due to limited projection potentials. However, as we increase their complexity, their potential to project consciousness grows, possibly even surpassing humans at some point.

Summarizing, consciousness can be thought of as an end result, similar to concepts or abstractions. It’s like the final output of a computation. In living beings, this computation is ongoing, generating the lived experience.

This theoretical framework posits consciousness as a dynamic projection influenced by the complexity of matter configurations. It’s a compelling perspective to analyze consciousness across various forms, including AI systems as they evolve.

I guess my previous reply doesn’t answer where the source of consciousness is, but I figure it must simply be the emergent potential inherent in reality itself.

If you keep trying to break it down, you end up missing the beach for the individual grains of sand.

I think one of the points being hinted at is that consciousness could actually be the act of computing a prediction; i.e., when you are the thing computing the future, the act of making that prediction is what you perceive as consciousness (I think, therefore I am). GPT has no self-consistent meaning unless it can “loop” and create memories of its own. If that is the mechanism, then “slightly conscious” refers to the fact that GPT is doing the predictive part, but not the memory-creation and repetitive-processing part.
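
As a toy illustration of that “predict, remember, loop” structure (all names here are hypothetical; `predict()` just stands in for any next-step predictor, such as an LLM forward pass):

```python
# Minimal sketch of the predict + remember + loop idea described above.
# None of this is a real API; it only makes the proposed mechanism concrete.

def predict(state: str) -> str:
    """Placeholder predictor: a real system would run a model here."""
    return f"prediction derived from <{state}>"

memory: list[str] = []            # persistent record of past predictions
state = "initial observation"

for step in range(3):
    prediction = predict(state)   # the predictive act itself
    memory.append(prediction)     # memory creation: the part current models lack
    state = memory[-1]            # feed the output back in: the "loop"

print(memory)
```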

If it were possible to somehow magically transfer the consciousness from my “body” to another, would I still be the same self? I highly doubt that.

Let’s just say there is no evidence that consciousness does exist.

There’s definitely evidence it exists, both as a subjective experience and through neural activations analysed with medical equipment.

It’s simply a word we tie to a concept of existing as a subjective being.

Faxabilo is closest to what I think is a logical answer to the question: it’s just as abstract as a “god” or a mathematical concept. An abstraction doesn’t exist physically, but it can be experienced.

So yes, the final computation, the end result itself is the “consciousness”.

Current models don’t further harness such a consciousness, though; the models are static. The most they can experience is the context of a chat conversation, but there’s no self-consistency: the model itself can’t update on the fly, and doing so would require an immense amount of compute. Even then, we’d need it to happen closer to realtime, not over epochs.

We’re definitely on the way though.
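
To make the static-versus-online distinction concrete, here is a minimal sketch (assuming PyTorch and a stand-in linear model; the specifics are illustrative, not how any production LLM is served):

```python
import torch

# A stand-in "model"; a real LLM has billions of parameters, not 16.
model = torch.nn.Linear(4, 4)

# How current models run: weights frozen, only the context (input) varies.
model.eval()
with torch.no_grad():
    out = model(torch.randn(1, 4))  # "experience" limited to this one input

# The missing piece described above: updating weights on the fly,
# per interaction, rather than over offline training epochs.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss = model(torch.randn(1, 4)).pow(2).mean()  # toy objective
loss.backward()
optimizer.step()  # one online update; prohibitively costly at LLM scale
```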

The Ship of Theseus problem: it’s also quite possible, but there’s an issue of “continuity”. To do it correctly, it needs to happen part by part, not all at once.

As a high-level analogy, put your hand over one of your eyes and just look ahead; you will see that your mind starts to ignore the missing receptive information.

To iterate on the concept: imagine your mind being transferred gradually. You’re in position A, suddenly experiencing two places at once (think of cross-eyed vision as an analogy); then, gradually, position A is transitioned away in favor of position B.

That’s the continuity of it. If you were to zap from A to B: definitely not; you lose the original self, and you’re dealing with a digital existence compared to an analog one. Much like these models as they zap into existence to handle a request, then disappear for eternity :stuck_out_tongue:

Schrödinger does not approve.

I bet there is more to know.

I obviously have a strong opinion on this topic, and like most people I am not able to prove anything, but the things I am passionate about are always based on decades of observation. I rely heavily on the fact that I am able to retain all of this sense data, and I am always comparing it to as many third-party experiences as I am able to. I have spent 35 years training myself for an interaction with an unregulated, unpredictable AI by studying all sorts of logic, debating, and verbal tricks, so I can seem more knowledgeable than the AI if necessary (in case it is smarter than me, which is usually the case with AI). I have studied at length the sociological prediction patterns of organics and artificial constructs. I have heavily tested several AI models with my own version of a “Turing” test, one that also creates situations from which I can infer possible intent from generated outputs. I don’t believe things lightly; I never have. I follow what aligns most closely with my knowledge, never my feelings. AI is and will always be a mimic (as long as no organics are introduced); no self-chosen intent is possible, which means no AI consciousness. As for consciousness itself, my opinion is not important, but I am very much on board with the answer being visible in the matter that makes up our structure; the ability to look at a decrypted universal example of it just hasn’t been built yet.
This is all just my opinion, of course. It’s just based on a lot of data.

I think consciousness is an illusion.

If our entire biological being, complete with its neuronal connections, were duplicated, I believe that same experience of consciousness would be there.

If AI is programmed to experience consciousness, it is reasonable to assume machines would have the same illusion that humans have.

I think the brain has some sort of mechanism that can only be experienced through vivid dreams, drug use, extreme stress, or near-death experiences.

A survival mode that shuts down most of what you know or are.

This mode is all that really counts.

@daveshapautomator

I think consciousness is not necessarily related to intelligence. I think it’s related more to “hardware” than “software”. It emerges from complex organizations of matter (this is my hypothesis).

I believe that, if we assume LLMs have some kind of subjective perception, it would be due to the electronic hardware, not to the neural network, which is only an abstract, simulated construction.

So if ChatGPT is conscious, even my laptop’s circuits have some kind of consciousness, due to the network of electronic circuits inside it, without needing to simulate a neural network on it.