Computing power steadily increases, and computer architecture becomes more complex and layered. Neuroscience reveals details of brain function, inspiring new designs for artificial processing. And so “intelligent” machines prove increasingly capable of marvelous performances—apparently blurring the line between human and machine.
Will the line be erased? The project of Artificial Intelligence can be understood as the attempt to close the gap between human and machine. Whether it will succeed is not yet clear, but the A.I. project has momentum. Let us suppose it does succeed. What then?
I imagine various reactions. Some would be confused, wondering whether an A.I. has rights and dignity. Others would be offended, or even scared, that we humans might lose our special place in creation. Still others would feel triumphant, proud to have demonstrated that human beings were never all that special—just complicated machines, and ones perhaps not all that worthy of admiration: upgradable, replaceable.
But the success of the A.I. project wouldn’t mean any of those things. It would be a wondrous practical, technical achievement that would not, in itself, give us any deeper access to truths we care most about.
If we develop machines that can exhibit behavior as if they were intelligent, will we thereby have learned more about what human intelligence is? Will we have learned that a human being just is a machine, and nothing more than a machine? Could any advanced android bring us any closer to understanding human nature, human purpose, and human destiny?
These are questions about theoretical matters, and ones of the utmost relevance—the truth about ourselves. The technical project of A.I., however successful it may be, is a practical challenge, nothing more: can we engineer something to mimic or simulate behavior associated with human intelligence? The success of that practical project doesn’t answer, and may not even touch on, questions about the truth of human nature.
The famous Turing test confirms this. It tells us that, as far as the engineering project is concerned, we can consider as intelligent anything capable of making us believe it is intelligent. As a success-criterion for the A.I. project this makes sense: we will know the project has succeeded when its product can pass itself off as what we recognize as “intelligent.”
But the Turing test is not a criterion of truth for questions about human nature. Indeed, the whole point of the Turing test is to set such questions aside as irrelevant to the practical challenge of imitating human behavior. If we do imitate human behavior with machines, that does not mean the machine is human, nor that intelligence just is whatever the A.I. is doing.
Simply put, we do not gain insight into the nature of human intelligence by simulating some of its behavior. Indeed, the more we blur the practical line between human and machine behavior, the more inescapable and pressing will be the enduring, and captivating, theoretical questions—questions of philosophy and theology, and questions of psychology and biology—about what distinguishes human beings from machines.