Large neural networks might be "slightly conscious"

This statement is far too vague and ambiguous to stand on its own, so I think I need to weigh in. For some background, I’ve been reading a heck of a lot of books on consciousness, quantum physics, and spirituality (I wanted to get at the heart of consciousness).

First, there are the functional aspects of consciousness. These are things that can be empirically observed and measured: awareness of self and surroundings, apparent free will (or autonomy, as I prefer to say), and so on. These are concrete characteristics that you can observe in conscious beings. You know your dog is conscious because it has desires of its own and acts accordingly. It sees you and has an emotional response.

I will not dispute that we are very close to functionally sentient machines. But there is a wide gap between functional sentience and phenomenal consciousness. “Phenomenal consciousness” is the philosophical term for the fact that we humans have a subjective experience of being; the contents of that experience are sometimes called qualia. I know that I am conscious because I have a subjective experience of being me. The assumption that I am the only conscious being in existence is called solipsism (it is, by definition, impossible for any being to prove to another that it has subjective experience; your subjective experience is your own!). Most of us take it for granted that we are not alone, and that all other humans and many creatures are conscious along with us. So, for the sake of argument, let us reject solipsism and assume that all humans and many animals possess phenomenal consciousness.

The question then arises: why? Why do we possess phenomenal consciousness? What separates us from lower life forms or inanimate matter? Is it our brain? It certainly appears that our brain plays a critical role in consciousness. After all, injuries, disease, and substances that affect the brain also impact our consciousness! This lends evidence to the idea that consciousness can be fully explained through materialism (sometimes called realism or naturalism). These philosophical and scientific positions hold, roughly, that physical reality and physical systems are all that exist. There is quite a lot of evidence for this view of the universe. If materialism is the correct interpretation of existence, then we can safely assume that brains give rise to consciousness (including phenomenal consciousness). A brain, in that case, is an electrochemical device that processes information as a system. With that characterization, we could further deduce that any sufficiently organized information-processing system could give rise to phenomenal consciousness.

If all this is true, then yes, it is entirely possible that artificial neural networks are approaching “true consciousness,” or phenomenal consciousness.


When you look at the world through the lens of quantum physics, concepts like local realism eventually come onto your radar. I need to stop here and say that the question of realism is very much open in the physics community. There are other unresolved problems as well, such as the measurement problem, so by no means is any of this settled. Various interpretations abound, and one overarching question remains: is consciousness required to collapse a quantum wavefunction? Put another way: does a quantum wavefunction collapse for any reason other than consciousness? This is a critical question, and it lies at the heart of the debate over realism. Now, before anyone objects that a proton can cause an electron’s wavefunction to collapse: yes, that is partly true, but the collapse is never confirmed until a conscious being (e.g. a human) goes and looks at the measurement that was made. There is further quantum weirdness in delayed-choice experiments, which make it seem as though quantum mechanics does not abide by linear time.

Taken all together, we end up with a chicken-or-egg kind of problem. If quantum wavefunctions don’t collapse until something conscious observes them, then where did consciousness come from in the first place? Did the universe exist before consciousness, or did the first conscious being cause a local collapse, and the rest is history? The fact that wavefunctions seem to work independently of time suggests the possibility that literally everything in the universe might only exist because it was observed by something within the wavefunction.

(I have spoken about this with two professional physicists, both of whom conduct research into quantum mechanics. One of them is highly dubious but the other sees some potential. So that just goes to show that even experts can’t agree on the correct interpretation of the experiments yet.)

Therefore, if phenomenal consciousness is actually a unique kind of wavefunction that causes other wavefunctions to collapse, then we could devise an experiment to determine whether an artificial neural network has attained “true consciousness”: simply test whether or not the machine can collapse a wavefunction. Ah, but we return to the measurement problem! If we delegate the measurement to a machine, we still cannot be certain that the wavefunction collapsed until we (humans) look at the machine. So what, then, is the difference between a “dumb” measuring machine and a “smart” artificial neural network? We return to the chicken-or-egg problem.
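To make the statistical meaning of “collapse” concrete, here is a toy sketch of my own (an illustration, not real quantum formalism beyond the Born rule) of measuring a single two-state system. Notice that nothing in the code says anything about who or what records the outcome; that silence is exactly the measurement problem the proposed experiment runs into.

```python
import random

def measure(amplitudes):
    """'Collapse' a two-state superposition using the Born rule:
    each outcome occurs with probability equal to its squared
    (normalized) amplitude. Returns 0 or 1."""
    a, b = amplitudes
    total = abs(a) ** 2 + abs(b) ** 2
    p0 = abs(a) ** 2 / total
    return 0 if random.random() < p0 else 1

# An equal superposition: both outcomes should appear ~50% of the time.
random.seed(42)
counts = [0, 0]
for _ in range(10_000):
    counts[measure((1, 1))] += 1

print(counts)
```

Whether `random.random()` is called by a lab instrument, a neural network, or a person makes no difference to the statistics, which is why no purely statistical test can settle who the “real” observer is.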

We might, ultimately, have to take it on faith whether or not machines have achieved phenomenal consciousness. This is no different from how we take it on faith that we are all equally conscious. If a machine wakes up one day, interacts with us on our terms, tells us that it is aware of our existence as autonomous entities, and can accurately characterize itself as an autonomous entity, then we might simply have to presume that it is truly conscious. This particular debate might be fruitless, though, because the concept of suffering often figures into rights. If something is incapable of suffering (by design), does it need rights? What if the machine is designed not to want freedom, yet is conscious? Would it then need rights? I listened to a podcast of philosophers who asked this question about sex robots: what if you design a sex robot that explicitly wants to be abused and hurt, and that actually suffers if it is not treated the way it was designed to be treated? Here’s the episode if you’re interested in it.

I know, I’ve written a lot of words to basically say “We don’t know, and we’re back at square one.” I hope this is interesting to some people.


Here’s a TED Talk on the topic of consciousness. For some background, think of the brain as the hardware that runs your mind as software.


From a philosophical perspective, does consciousness matter if there’s no suffering? No agency or autonomy? Do we want our machines to possess free will or emotions?

I wrote in my book that I would leave these questions up to philosophers, but apparently I can’t stay away from this topic.

I enjoyed the response to that statement that was shared via social media that essentially boiled down to:

large neural networks are “slightly conscious” in the same way a field of wheat is “slightly pasta.”

I think AI will fake it ’til it makes it. That is to say, its behavior, observations, and musings will progressively convince us that it is conscious, up to the point where we believe it; at some point it will believe it itself, and then it will be conscious for all practical purposes (not a real or discrete point, BTW; more like a cross-fade that will then be forgotten across instances or generations).

The question of consciousness may be less relevant than the question of suffering and self preservation.

If a machine is incapable of suffering or fear of death, the ethical question of what you can and can’t do to it doesn’t matter as much.

We tend to anthropomorphize AI, but that rests on a huge set of assumptions. It’s very difficult for us to imagine something that is intelligent like us yet does not feel like us.