Truth & Consequences

Why Our Machines Will Never Be as Smart as We Are

Consciousness is an ineffable, primal awareness of our internal states. Computers will never have it.


By Girish Mhatre

What’s at stake?
Some foresee the advent of a maliciously motivated superintelligence. Don’t worry – it’s not going to happen. Strong AI is a myth.

The most recent update to an ongoing poll of computer scientists indicates that most of them believe that the technological “singularity” – the emergence of artificial intelligence that exceeds natural human intelligence, with potentially disastrous effects – will be reached by the year 2060. Many, like famed American inventor Ray Kurzweil, now director of engineering at Google, believe it will happen much sooner.

Kurzweil’s 2005 book, The Singularity is Near, and Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies, have stoked the popular imagination. Movies like Her and Ex Machina, the pervasiveness of conversational bots (Siri and her ilk) and regular reports of computers winning at chess and Go have given the impression that, for better or worse, Artificial General Intelligence, also known as Strong AI, is just around the corner.

Clearly, Strong AI could confer tremendous benefits on society, but a dark shadow looms. The late British physicist Stephen Hawking expressed his concern several years ago, suggesting that Strong AI could end humanity if deployed incautiously. Meanwhile, Elon Musk continues to warn that advancements in AI represent the biggest existential threat to humankind.

It’s not going to happen. A maliciously motivated superintelligence isn’t in the cards. Strong AI is simply not possible.

The reason is that computers can never acquire consciousness, a feature of our existence that we take completely for granted.

We are conscious beings because there is “something that it feels like” to be us. That feeling is termed “subjective experience.” It’s a first-person point of view. Our subjective experiences, such as hunger, love, fear, pain, jealousy and sexual arousal, are ours alone, yet hard to describe simply in terms of the gross senses of sight, sound, taste, touch and smell. Consciousness is this ineffable, primal awareness of our internal states. More lyrically, it is the “feeling of life itself,” according to renowned consciousness researcher Christof Koch, president and chief scientist of the Allen Institute for Brain Science.

Consciousness precedes intelligence
The reason consciousness is crucial is that it is, obviously, a precursor to intelligence, which is the ability to apply the awareness of our internal states to solve problems or to navigate our environment. Consciousness is what allows us to build a model of reality and to imagine alternatives to it by asking “what if” questions. Without the awareness of subjective experience there can be no models of reality and no intelligence, let alone superintelligence.

The question is: can we program this “feeling of life itself” into a machine?

To replicate this awareness in a machine requires us first to understand it. But while being conscious may be the most familiar experience we have, consciousness itself is hard to define in scientific terms.

Physics has given us an understanding of the fundamental nature of the material world, but it is silent about consciousness. There is no physical mechanism that we can identify within the framework of either classical or quantum mechanics that can generate a subjective experience. If we can’t understand it, how can we imbue a machine with a mind like ours?

In his 1989 classic, “The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics,” Sir Roger Penrose, winner of the 2020 physics Nobel, argues that what goes on in the human mind – and in the minds of apes and dolphins, for that matter – is very different from the workings of any existing or imaginable computer. Penrose uses various arguments, including one involving Gödel’s Incompleteness Theorems, to show that consciousness is “non-computable” – that is, it cannot be represented by an algorithm. Since all computer programs are algorithms, consciousness cannot be programmed into a machine.

Proponents of Strong AI argue that since the brain is a conscious machine – constructed of “wetware” rather than hardware – other machines with the appropriate architecture should also exhibit consciousness. This is the physicalist position that consciousness is an emergent property – an “epiphenomenon” – of architectural complexity. In other words, as a system grows architecturally more complex, there comes a point at which – boing! – it develops an awareness of its internal states. Consciousness, physicalists believe, is a fundamental property of complex networked entities.

iPhone vs. fruit fly
But where is the complexity threshold that triggers consciousness? My iPhone is an extremely complex networked entity. It probably has more transistors than a fruit fly has neurons. Yet I find it hard to believe that it has the consciousness of a fruit fly, or any consciousness at all. What about Deep Blue, IBM’s AI supercomputer that defeated world chess champion Garry Kasparov, or, more recently, Google’s AlphaGo AI, which beat world Go champion Ke Jie? Did either, or both, of those AIs exhibit Strong AI, i.e. consciousness?

As a conscious being, Kasparov was surely aware not only of the fact that he was playing a high-stakes match against a machine in a highly public setting, but also of the intricacies of the game. He had to be conscious that he was playing chess, and that he’d opted for his favorite opening – the Grünfeld defense.

Deep Blue, you can be sure, was conscious of none of these things, because it was not conscious of anything at all. To a machine, code and inputs are nothing more than sequences to execute – a set of transitions in electronic states that we can interpret computationally. The transitions in electronic states are observer-independent, but the resulting computation is observer-relative, because the programmer has assigned meaning to the variables. The machine doesn’t care; to the machine there is no meaning in this sequencing or execution activity. Thus, a machine never truly “knows” what it’s doing; it can only take on the operational appearance of comprehension.
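To make the observer-relative point concrete, here is a minimal sketch in Python (the values and labels are invented for illustration; they have nothing to do with Deep Blue itself). The machine performs the same state transition either way; only the human readings attach meaning to the result.

    # Toy illustration: the machine performs identical state transitions
    # no matter what meaning an observer assigns to the values.
    def next_state(state: int) -> int:
        # The hardware simply maps one bit pattern to another.
        return (state * 3 + 1) % 64

    s = next_state(19)

    # Two observers "read" the same result differently; the machine has no stake in either.
    as_chess_square = "square " + "abcdefgh"[s % 8] + str(s // 8 + 1)  # a chess programmer's label
    as_temperature = str(s) + " degrees Celsius"                      # a thermostat programmer's label
    print(as_chess_square, "|", as_temperature)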

Deep Blue did not even know it was playing chess. Like all modern AIs, it was nothing more than a brute-force pattern-recognition system.

Appearance of comprehension
The Chinese Room thought experiment demonstrates how an “operational appearance of comprehension” can be created. It was proposed back in 1980 by John Searle, currently professor emeritus of the Philosophy of Mind and Language at the University of California, Berkeley.

The experiment imagines Searle, himself, alone in a room following the instructions of a computer program that responds to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program’s instructions, he sends appropriate strings of Chinese characters back out under the door. These character strings mean nothing to Searle, yet those outside the door mistakenly suppose there is a Chinese speaker in the room.
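A minimal sketch of the idea in code (the rulebook entries are placeholder phrases, not Searle’s original example): the program returns plausible replies purely by matching strings, with no representation of what any of them mean.

    # Toy "Chinese Room": replies come from a lookup table of opaque strings.
    rulebook = {
        "你好吗？": "我很好，谢谢。",        # entries are just character sequences to the program
        "你会下棋吗？": "会，我们开始吧。",
    }

    def room(slip_under_door: str) -> str:
        # Pure symbol shuffling: match the input pattern, return the paired reply.
        return rulebook.get(slip_under_door, "请再说一遍。")

    print(room("你好吗？"))  # looks like understanding from outside the door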

Searle, who coined the term Strong AI in 1980, argues that without “understanding” we cannot describe what the machine is doing as “thinking” and, since it does not think, it does not have consciousness. Therefore, the Strong AI hypothesis is false.

Says Searle, “How much real, observer-independent intelligence do computers have, whether ‘intelligent’ or ‘superintelligent’? The answer is zero, absolutely nothing. The intelligence is entirely observer relative. And what goes for intelligence goes for thinking, remembering, deciding, desiring, reasoning, motivation, learning, and information processing, not to mention playing chess and answering the factual questions posed on Jeopardy! In the observer-independent sense, the amount of knowledge that the computer possesses of each of these is zero. Commercial computers are complicated electronic circuits that we have designed for certain jobs. And while some of them do their jobs superbly, do not for a moment think that there is any psychological reality to them.”

So, should we stop worrying about the singularity? Searle’s answer is a resounding yes.

“It is easy to imagine robots being programmed by a conscious mind to kill every recognizable human in sight. But if we are worried about a maliciously motivated superintelligence destroying us, then it is important that the malicious motivation be real. Without consciousness, there is no possibility that superintelligent computers intentionally set out on their own to destroy us, based on beliefs, desires and other motivations, because the machinery has no beliefs, desires, and motivations.”

There is, however, quite another possibility: The singularity is already here; conscious machines already exist. In fact, they are us. We, ourselves, are conscious computers of flesh and blood, capable of beliefs, desires and motivations, slouching toward the apocalypse, validating the doomsday prophesied by the singularity.

Bottom line:
Without the awareness of subjective experience there can be no models of reality and no intelligence, let alone superintelligence. The singularity is NOT nigh.


Girish Mhatre is the former editor-in-chief and publisher of EE Times.

