03 October 2015

What's wrong with Searle's Chinese Room argument?

Searle has described his Chinese Room Argument (CRA) on many occasions. Here is a typical example:
Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese. (Searle, 1999, ‘The Chinese Room’, in Wilson, R.A. and F. Keil (eds.), The MIT Encyclopedia of the Cognitive Sciences, Cambridge: MIT Press)
This argument is what we might call a nice piece of intellectual conjuring: Searle diverts the spectator's attention away from what matters, the place where the “magic” actually happens, namely the “book of instructions for manipulating the symbols”, and directs it towards the unimportant but attention-grabbing actor brought to the forefront, the “native English speaker who knows no Chinese”.

The trick evaporates once you start asking questions about the “book”. How does it really work? Suppose, for instance, that the “book” is stored inside the head of a real native Chinese speaker who is conveniently hiding under the table. Then we know perfectly well why the room passes the test: there is someone in there who understands Chinese (even though it isn't the person who hands out the answers).

But now suppose the “book” is stored in a computer; that is, suppose the program Searle mentions is a real computer program. By hypothesis, the room still passes the test, and the English speaker still doesn't know Chinese (but, as we saw above, that is irrelevant for deciding whether anyone, or anything, in the room actually understands Chinese). I would say this implies that the computer program genuinely understands Chinese, given that it successfully replaces the Chinese speaker under the table. Of course Searle would not grant us that, but at this point his “argument” no longer works, because he would have to assume the very thing he set out to demonstrate, namely that the computer program doesn't “really” understand Chinese; in other words, the argument begs the question.

Useful rule: whenever someone presents you with a thought experiment, remember this definition: a thought experiment is a method for generating the maximum amount of confusion with the fewest possible words. The vast majority of thought experiments are language tricks based on some form of misdirection, relying on misfirings of our intuition.