Opinion

A (Very) Brief History of Consciousness

If we understand the mechanism of subjective experience, we could choose, or not, to put it into AI.
A murmuration of starlings as an example of emergent behavior.


By Girish Mhatre

Twenty-five years ago, a philosopher and a neuroscientist walked into a bar. And they made a bet. The philosopher was David Chalmers, enfant terrible of the nascent science of consciousness studies. Chalmers’ notoriety stemmed from his recent formulation of what he called the “hard problem” of consciousness – the mystery of subjective experience (“mind”) arising from the brain (“matter”). The neuroscientist was Caltech professor Christof Koch, developer – jointly with the Nobel Prize-winning biologist Francis Crick – of the theory of neural correlates of consciousness (NCCs), formally defined as the minimal set of neuronal events jointly sufficient for a specific conscious experience, or as Koch describes them, “the footprints of consciousness in the organ of consciousness, the brain.”

At this point it is important to point out that consciousness is not to be confused with intelligence. Consciousness is about feelings (first person – subjective – experiences); it’s what mental states such as joy, grief, pleasure, pain, taste, or whatever, feel like. According to Koch, consciousness is “the feeling of life itself.” Intelligence, in contrast, is the ability to learn, comprehend complex ideas, think abstractly, reason and plan. 

Koch, the empiricist, believed that consciousness was fundamentally measurable and that it was only a matter of time before science identified its source in the brain. Chalmers demurred. One alcohol-fueled night, during the inaugural meeting of the Association for the Scientific Study of Consciousness (ASSC) in 1998, Koch bet Chalmers a case of fine wine on the premise that, within the next twenty-five years, cognitive scientists would be able to locate clear neuronal activity patterns that correlated with specific conscious experiences.

The next twenty-five years saw an explosion of interest in the field of consciousness studies. Several competing theories of consciousness were postulated and vociferous adherents of each focused feverishly on empirical research to test their own theories, while casting aspersions on their rivals.

Two of the more prominent theories are the global neuronal workspace theory (GWT), which hypothesizes that consciousness emerges from networks of neurons at the front of the brain (prefrontal cortex), and the integrated information theory (IIT), which posits that the locus of consciousness lies near the back (posterior cortex). The two camps battled for years, each claiming the other’s experiments were fatally flawed, until the Templeton World Charity Foundation agreed to fund an “adversarial collaboration” in 2019. The warring parties agreed to work together to design experiments that would be performed in neutral laboratories around the world.

The results of the collaboration, announced at the 25th meeting of the ASSC, held at New York University earlier this summer, were inconclusive. Both theories were “challenged” by the experimental results, agreed Koch and Chalmers. Bottom line: The neural correlates of consciousness remain elusive.

Koch paid up. At the conference dinner in New York’s Greenwich Village, Koch presented Chalmers with a case of fine wine. Chalmers reached in and drew out a 1978 Madeira. It was a very good year.

But even if the neural correlates were identified and mapped, that would not have solved Chalmers’ hard problem. Correlation, after all, is not causation. How a three-pound lump of tissue inside the skull gives rise to a mind that is self-aware and enjoys subjective experience remains a mystery. Still, mapping NCCs in the brain is a first step. Such a map might also be useful in the treatment of various mental disorders.

The relevance of the results of the adversarial collaboration to current research on artificial intelligence is significant: If we have no idea how consciousness arises in the brain, how then can we program it into our machines?

Enter “emergence,” a familiar phenomenon in daily life. Emergence occurs when a complex entity exhibits apparently unpredictable properties or behaviors that its individual parts do not possess; these properties emerge only when the constituent parts interact as a system. Dust, debris and wind do not by themselves predict the swirl of a tornado; the shape of a water molecule does not predict the symmetric fractal pattern of a snowflake; nor does the behavior of an individual bird predict the murmuration of starlings.
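Emergence of this kind is easy to demonstrate in simulation. The sketch below is a minimal, purely illustrative “boids”-style flocking model; the rules and parameter values are invented for illustration, not taken from any study mentioned here. Each simulated bird follows only three local rules – separation, alignment and cohesion – none of which mentions a flock, yet murmuration-like patterns appear at the level of the whole system.

```python
import numpy as np

# A minimal "boids"-style flocking sketch (illustrative parameters only).
# Each bird reacts only to its nearby neighbors; flock-level patterns are emergent.

N = 200                                      # number of simulated birds
POS = np.random.rand(N, 2) * 100             # positions in a 100 x 100 arena
VEL = (np.random.rand(N, 2) - 0.5) * 2.0     # random initial velocities
NEIGHBOR_RADIUS = 10.0
MAX_SPEED = 2.0

def step(pos, vel):
    new_vel = vel.copy()
    for i in range(N):
        offsets = pos - pos[i]               # vectors from bird i to every other bird
        dists = np.linalg.norm(offsets, axis=1)
        near = (dists < NEIGHBOR_RADIUS) & (dists > 0)
        if not near.any():
            continue
        too_close = near & (dists < 3.0)
        if too_close.any():                  # separation: steer away from crowding
            new_vel[i] -= 0.05 * offsets[too_close].sum(axis=0)
        # alignment: nudge toward the average heading of nearby birds
        new_vel[i] += 0.05 * (vel[near].mean(axis=0) - vel[i])
        # cohesion: drift toward the local center of mass
        new_vel[i] += 0.01 * offsets[near].mean(axis=0)
        speed = np.linalg.norm(new_vel[i])
        if speed > MAX_SPEED:                # cap the speed
            new_vel[i] *= MAX_SPEED / speed
    return (pos + new_vel) % 100.0, new_vel  # wrap around the arena edges

for _ in range(500):
    POS, VEL = step(POS, VEL)                # plot POS each step to watch the "flock"
```

Nothing in the code describes the swirling shapes an observer would see if the positions were plotted frame by frame; those shapes exist only at the system level, which is precisely the observer-relative sense of emergence discussed below.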

The thing about emergence is that it is entirely observer-relative. Individual birds do not know – nor do they intend – that they are creating ghostly shape-shifting patterns in the sky that fill humans with awe. Termites don’t know that they are building what humans perceive as magnificent cathedral-like mounds. Starlings and termites may both be conscious, but nowhere near to the same degree as we are. Inanimate dust particles and water molecules are not conscious at all, yet still exhibit emergence under the right conditions.

Emergence is what AI researchers are counting on. Like dust particles and water molecules, inanimate electronic circuits are not conscious in themselves, but organized in the right way – with sufficient complexity and connectedness – they could exhibit emergent behavior. Says Sébastien Bubeck, senior principal researcher in machine learning at Microsoft, about GPT-4: “I think it’s likely that extreme data scale … is what gives it the ‘magical’ extrapolation properties.” Anybody who has experimented with GPT-4 will testify to being gobsmacked by its abilities; abilities that are hard, even impossible, to predict from the computational model of a single neuron. So, yes, by that definition, GPT-4 does demonstrate emergent behavior. But it is intelligence that emerges, not consciousness. There is no indication that GPT-4 feels anything.

It is quite likely that advances in hardware will enable future machines with much higher intelligence than GPT-4. But it is unlikely that subjective experience will emerge.

While unpredictable behavior at the system level can be seen as an emergent phenomenon, at the granular level the behavior of the components can be fully deterministic and can be modeled. We can’t predict the shape of a particular snowflake because of the many variables involved, but we know how it forms. It’s physics all the way down. So also with tornadoes. GPT-4 may exhibit astonishing properties, but we know how it works – we designed it. Not so with consciousness. We simply don’t know how subjective experience is created, or where it resides. There is no way to model it because we don’t even know what its components are.

It may well be that consciousness and its origins are beyond the current limits of physics. Chalmers has said that “the emergence of experience goes beyond what can be derived from physical theory.” Koch maintains that consciousness is not in our foundational equations right now, but that it might not stay that way forever. Physics Nobelist Sir Roger Penrose argues that “consciousness is where our current theories of quantum mechanics go wrong; it’s to do with a theory we don’t know yet.”

The obvious question all this raises: Why bother? Who needs artificial humans, i.e., machines that have feelings? Is it simply our evolutionary instinct that drives us to probe the boundaries of science? If so, we ought to proceed with caution.

Canadian computer scientist Yoshua Bengio, whose pioneering work in neural nets and deep learning earned him the 2018 Turing Award, warns that “an important aspect of consciousness is our self-preservation instinct. Putting that into machines could be very dangerous, introducing a new kind of species that could be smarter than us, posing existential risks. If we understand the mechanism of subjective experience, we could choose, or not, to put it into AI. Let’s not be stupid about it.”


Girish Mhatre is the former editor-in-chief and publisher of EE Times. The views expressed in this article are those of the author alone and do not necessarily represent the views of the Ojo-Yoshida Report.

