Why the Robot Apocalypse Will Not Happen Any Time Soon

It all begins with good intentions. Engineers create cybernetic helpers for everything from cleaning house to fighting wars. Or someone decides we need a master computer to make the world run, you know, a little better. In any case, the result is nearly always catastrophic. Skynet takes over the planet (Terminator). A corporate mainframe initiates the dreaded robot rebellion (I, Robot).

Morality

What are all these movies trying to tell us? On a positive note, they are warning us against the dangers of dehumanization. We cannot think that our future – our salvation, if you will – depends on ever smarter labor-saving devices.

WALL-E gives us a family-friendly version of how this reliance on machines can go terribly wrong. The robot heroes in this film have endearing human qualities. In the end, however, their role is not to accommodate man in his indolence but to help him literally get on his feet again. While I would never claim that WALL-E is a Christian allegory, its themes are consistent with the Bible’s emphasis on personal accountability: we are designed to work, and we are moral beings. Both of these principles are explored in Genesis 2 and 3.

A third principle is found in Genesis 1: man may make, but he does not create. Here I mean to focus on our moral limitations. Making an artificial sentient being, even if such a thing were possible, introduces added responsibilities that belong uniquely to a loving Creator God. If we craft autonomous machines that work just like us, can we make them a little lower than the angels, just like us (Psalm 8)? And if so, are we prepared to treat them as moral beings accountable to God, just like us?

The Nature of the Mind

On a negative note, many of these movies assume that machine consciousness is inevitable. The leap from cat-chasing Roombas to people-chasing Terminators is only an upgrade away. An emotion chip puts Data on a par with his human shipmates (Star Trek: The Next Generation). Implanted childhood memories make the Nexus-6 replicants almost indistinguishable from their human creators (Blade Runner). Add enough complexity to the mix, it seems, and the laws of the universe will do the rest. Matter and evolution are all we need.

We buy human-level artificial intelligence on the big screen because we have been conditioned to believe that it is, in large part, already here. Optical character recognition (OCR) programs have long touted their AI capabilities. If we scratch below the surface, however, we find a very narrow set of reading skills. The program will guess that an “l”-shaped character surrounded by letters is more likely to be the letter “l” than the numeral “1.” The results are amazing, but there are no glowing red eyes or ominous metallic voices to suggest that world domination is just around the corner.
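
To see just how narrow the trick is, here is a toy sketch in Python of the kind of context rule at work. (This is my illustration of the idea, not any real OCR engine’s code.)

    # Toy version of an OCR context rule: decide whether an ambiguous
    # "l"-shaped glyph is the letter "l" or the numeral "1" by looking
    # at its neighbors. Illustrative only.
    def resolve_ambiguous_glyph(prev_char, next_char):
        if prev_char.isalpha() or next_char.isalpha():
            return "l"  # flanked by letters: probably part of a word
        if prev_char.isdigit() or next_char.isdigit():
            return "1"  # flanked by digits: probably part of a number
        return "l"      # no context: default to the more common reading

    print(resolve_ambiguous_glyph("e", "p"))  # as in "he?p" -> "l"
    print(resolve_ambiguous_glyph("0", "2"))  # as in "0?2"  -> "1"

A handful of rules like this can read a page remarkably well, and not one of them involves anything like understanding.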

AI has failed to live up to the hype. In 1950, Alan Turing proposed a test to see whether a computer could pass for human in conversation. A few programs have fooled their judges, but only by deflecting: they turn the user’s own input back into canned questions. We have yet to see a computer genuinely answer questions in a Turing test and sound convincing at the same time.
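
The deflection trick is easy to sketch. Something like the following toy Python program, in the spirit of Joseph Weizenbaum’s ELIZA (a hypothetical illustration, not any actual contestant’s code), can keep a conversation going without ever answering anything:

    # ELIZA-style deflection: reflect the user's words back as a
    # question instead of answering. Illustrative only.
    import random

    REFLECT = {"i": "you", "am": "are", "my": "your", "me": "you"}
    TEMPLATES = [
        "Why do you say {}?",
        "What makes you think {}?",
        "Tell me more about {}.",
    ]

    def deflect(user_input):
        words = user_input.rstrip("?.!").split()
        mirrored = " ".join(REFLECT.get(w.lower(), w) for w in words)
        return random.choice(TEMPLATES).format(mirrored)

    print(deflect("I am worried about my job"))
    # e.g. "Why do you say you are worried about your job?"

Nothing here knows what a job is or what worry feels like; the program only rearranges the user’s own words.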

John Searle pushes the argument even further. He imagines a situation in which a program could pass the Turing test and still not qualify as conscious. In his famous Chinese Room thought experiment, a man sits in an isolated room and receives a message written in Chinese. After a few minutes, he produces a convincing response. It has all the trappings of a real conversation, except that the man in the room knows no Chinese. He has a stack of symbols (a database) and is simply arranging those symbols according to a certain set of rules (the program). The point is this: shuffling symbols (or bits) is not the same as understanding what those symbols really mean. This is true for the man behind the wall, and it is true for every example of AI we have seen so far.
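
In code, the whole room collapses to a lookup table. A toy sketch, with made-up rulebook entries purely for illustration:

    # Searle's point in miniature: match incoming symbols against a
    # rulebook and emit the prescribed reply. The matcher never consults
    # the meaning of the symbols, only their shapes.
    RULEBOOK = {
        "你好吗?": "我很好，谢谢。",       # "How are you?" -> "Fine, thanks."
        "你叫什么名字?": "我没有名字。",   # "What is your name?" -> "I have no name."
    }

    def chinese_room(message):
        return RULEBOOK.get(message, "请再说一遍。")  # "Please say that again."

    print(chinese_room("你好吗?"))  # 我很好，谢谢。

The program passes notes under the door just as the man does, and it understands Chinese exactly as well as he does: not at all.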

The Origin of the Mind

Technology marches on, but the robot apocalypse will need a lot more than raw computational power to get going. Alan Winfield, an expert in robotics, develops a Paleyesque analogy to make this point. Consider a man who wants to build a cathedral. He can have all the materials and labor at hand, but “he also needs the design.”* If a robot mind needs a designer, then what does this say about the human mind?

Like researchers in so many fields of science today, AI researchers are chronically hobbled by their infatuation with materialism. They cannot answer basic questions about morality, the nature of the mind, or the origin of that mind.

[A version of this article appeared in Think, May 2010, p. 7.]


* Alan Winfield, “On wild predictions of human-level AI,” February 15, 2006. http://alanwinfield.blogspot.com/2006/02/on-wild-predictions-of-human-level-ai.html.

© 2010 – 2011, Trevor Major. All rights reserved.


