(Three Jeopardy! episodes featuring Watson as a contestant are scheduled for Feb. 14–16, 2011.)
The usual question is, of course, how can Watson play Jeopardy!? For that, watch the NOVA episode. I'd rather talk about the Chinese Room.
I used to visit and occasionally contribute to an internet forum which had a participant who, had there not already been one, should have been the patron saint of lost causes. If there was a silly position to take on a subject, this guy would be a staunch proponent. If memory serves, he was an advocate for astrology, Ayn Rand and Objectivism, libertarianism, Zionism, his own variation of Rupert Sheldrake's morphogenesis, and all sorts of other strange things. He wasn't a complete loon, but...
And I seem to recall that he had a peculiar argumentative style not unlike the cowardly and slippery method of Socrates, in that he would never actually assert anything, but would instead answer questions with questions. (I say cowardly because Socrates never actually took a stand on anything.) In any case, he did once make an actual assertion: that computers would never be intelligent like people, because a guy named John Searle had provided an ironclad proof that computers could mimic human behavior but never understand it. This argument is called The Chinese Room.
[Image: John Searle in the Chinese Room]
The setup: a man who knows no Chinese sits in a room with a library of data and a rule book; slips of paper with Chinese writing come in, and by mechanically following the rules he sends Chinese replies back out. But here is Searle's catch. The library and rule book are not in Chinese. They could be, since this is a computer, all zeroes and ones. But for the purposes of Searle's argument, they are in English. They are in English so that Searle can take the place of the computer or automaton. He gets the Chinese statement as input, he follows the rules and accesses the data in English, and responds in Chinese as output. Searle fools the Chinese interlocutor into believing he understands Chinese, when in fact Searle never understands a word of what he is saying, just like the computer. In short, the computer does not understand.
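The Room's mechanics amount to pure symbol lookup, and that can be sketched in a few lines of code. This is only an illustration of the idea (the rule entries below are invented for the example, not taken from Searle): the operator matches opaque input symbols against rules and emits the prescribed output, and no step in the process involves meaning.

```python
# A toy Chinese Room: the "rule book" maps input symbol strings to output
# symbol strings. To the operator these are opaque tokens, not sentences.
RULE_BOOK = {
    "你好吗": "我很好",          # rule: seeing these symbols, reply with these
    "你叫什么名字": "我叫约翰",   # another rule; the operator understands neither
}

def chinese_room(symbols: str) -> str:
    """Follow the rule book mechanically; no step requires understanding."""
    # Unknown input falls back to a stock reply ("please say that again").
    return RULE_BOOK.get(symbols, "请再说一遍")

print(chinese_room("你好吗"))  # → 我很好
```

The interlocutor outside sees fluent Chinese replies; inside, there is only table lookup. Whether the *whole system* understands is exactly what the objections below argue about.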
Well, so what, you say. And justly so. Isn't this just another one of those stupid philosophical arguments? Doesn't this just show that the Turing Test doesn't cut the mustard? Doesn't this illuminate a point that anyone past the mental age of six can figure out? That appearances are deceiving? That the stuffed animal puppet is really not a real person?
Well, yes and no. It does turn out that the argument is stupid, and it all hinges on Searle's implication that because he doesn't understand Chinese, the automaton or computer doesn't either. You see, there have been many objections to his argument, but the strongest is this: even though the whole process is mechanical and completely syntax driven, with no semantics, no meanings derived anywhere along the way, the program may not understand, but the Room (automaton or mechanism or what have you, plus program) clearly does. Searle's counterpoint to that is: fine, I will memorize the whole library of data and the rule book (which is all in English), and now the Room is completely inside my head. Now he can speak directly to the Chinese interlocutor, process everything in English in his head, and understand not one word of Chinese.
But all this does is further illustrate the weakness of his argument. By simulating the Room inside his head, he expects us to believe that he is the Room. In other words, that his mental simulation exactly duplicates the physical states of the Chinese Room. But he is hoist by his own petard. His mental simulation is clearly not the same thing; it is a simulation. Therefore, the fact that he, John Searle, does not understand Chinese is not the same statement as "the Chinese Room doesn't understand Chinese."
Well, okay. So what? Well, the point was that the patron saint of lost causes, invoking Searle, felt that computers could never, ever really be intelligent. Can't they?
Watson clearly isn't intelligent. But Watson isn't just doing what the Chinese Room is doing (which is basically the way old digital computers, with their now-primitive 1970s architectures, did things). Watson uses an incredibly powerful technique called machine learning. Machine learning is learning by example, by trial and error. This is far more powerful than mere rule following, because the rules (if any exist) are changeable by more and more examples. Watson can change his behavior, just like people (most people).
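To see the contrast with the Room's fixed rule book, here is a minimal sketch of learning by example (a classic perceptron, not Watson's actual algorithms): the program's "rules," its weights, are not written down in advance but adjusted by trial and error as labeled examples correct it.

```python
# Learning the logical AND function from examples, by trial and error.
examples = [
    ((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1),
]

w = [0.0, 0.0]  # learned weights -- the changeable "rules"
b = 0.0         # learned bias

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Trial and error: nudge the weights whenever a prediction is wrong.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

print([predict(x) for x, _ in examples])  # → [0, 0, 0, 1]
```

No rule book ever says "output 1 only when both inputs are 1"; that behavior emerges from the examples, and a different set of examples would produce different behavior from the very same code.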
Does this mean Watson and computers like him (it) might eventually get smart? Why not?