Friday, February 11, 2011

The Chinese Room

This past Wednesday, I watched a NOVA episode about the upcoming Jeopardy! series featuring Watson, the supercomputer contestant. (And do I contradict myself, after writing a snobby semi-tirade against popular-science media presentation, by then watching NOVA? Eh, so what.)

(Three Jeopardy! episodes involving Watson as a contestant are scheduled for Feb. 14-16th, 2011).

The usual question is, of course, how can Watson play Jeopardy! at all? Well, watch the NOVA episode. I'd rather talk about the Chinese Room.

I used to visit and occasionally contribute to an internet forum which had a participant who, had there not already been one, should have been the patron saint of lost causes. If there was a silly position to take on a subject, this guy would be its staunch proponent. If memory serves, he was an advocate for astrology, Ayn Rand and Objectivism, libertarianism, Zionism, his own variation of Rupert Sheldrake's morphogenetic fields, and all sorts of other strange things. He wasn't a complete loon, but...

And I seem to recall that he had a peculiar argumentative style not unlike the cowardly and slippery method of Socrates, in that he would never actually assert anything, but rather answer questions with questions. (I say cowardly because Socrates never actually took a stand on anything.) In any case, he actually did make one assertion: that computers would never be intelligent like people, because a guy named John Searle had provided an ironclad proof that computers could mimic human behavior but never understand it. This argument is called the Chinese Room.

John Searle in the Chinese Room 
You can either read the link, or, well, here's the short version. A kind of Rube Goldberg computer or automaton is set up in an empty room (a black box, if you will) with which a Chinese person can hold a conversation. Inside is a library of data and a rule book that allow any sentence or question in the Chinese language to be processed and answered. So a Chinese person can talk to the Chinese Room and, after a sufficient conversation, become convinced that he is talking to another Chinese person within the room. In other words, the computer's behavior is indistinguishable from a person's, the computer passes what is called a Turing Test, and so it must be considered an intelligent being.

But here is Searle's catch. The library and rule book are not in Chinese. They could be, since this is a computer, all zeroes and ones; but for the purposes of Searle's argument, they are in English. They are in English so that Searle can take the place of the computer or automaton. He gets the Chinese statement as input, follows the rules and consults the data in English, and responds in Chinese as output. Searle fools the Chinese interlocutor into believing that he understands Chinese when, in fact, Searle never understands a word of what he is saying, just like the computer. In short, the computer does not understand.
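
To make the purely syntactic nature of the Room concrete, here is a toy sketch in Python. The phrases and rules are my own invention (written in pinyin rather than Chinese characters, for readability), not anything from Searle's paper; the point is only that the lookup never consults a meaning anywhere.

```python
# A toy, purely syntactic "Chinese Room": input symbols are matched against
# a rule book and mapped to output symbols. Nothing in here "knows" what any
# phrase means -- it is pattern matching all the way down.
# (Hypothetical rules, written in pinyin for readability.)

RULE_BOOK = {
    "ni hao ma?": "wo hen hao, xiexie.",             # "How are you?" -> "I'm fine, thanks."
    "ni jiao shenme mingzi?": "wo jiao xiao ming.",   # "What's your name?" -> "I'm Xiao Ming."
}

DEFAULT_REPLY = "qing zai shuo yi bian."  # "Please say that again."

def chinese_room(utterance: str) -> str:
    """Look up the input string and return the scripted reply.

    The lookup is pure symbol manipulation: whether the operator is a
    computer, or Searle following an English rule book, the answer comes
    out the same, and no meaning is consulted at any step.
    """
    return RULE_BOOK.get(utterance.strip().lower(), DEFAULT_REPLY)

if __name__ == "__main__":
    print(chinese_room("Ni hao ma?"))              # wo hen hao, xiexie.
    print(chinese_room("Ni jiao shenme mingzi?"))  # wo jiao xiao ming.
    print(chinese_room("Something unexpected"))    # qing zai shuo yi bian.
```

Whether that counts as "understanding" is, of course, exactly what the argument is about.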

Well, so what? you say. And justly so. Isn't this just another one of those stupid philosophical arguments? Doesn't this just show that the Turing Test doesn't cut the mustard? Doesn't this illuminate a point that anyone past the mental age of six can figure out: that appearances are deceiving, that the stuffed-animal puppet is not really a real person?

Well, yes and no. It does turn out that the argument is stupid, and it all hinges on Searle's implication that just because he doesn't understand Chinese, the automaton or computer doesn't either. You see, there have been many objections to his argument, but the strongest is that, even though the whole process is mechanical and completely syntax-driven, it does not follow that no semantics, no meanings, come out of the process. In short, the program may not understand, but the Room (automaton or mechanism or what have you, plus program) clearly does. Searle's counterpoint to that is: fine, he will memorize the whole library of data and the rule book (which is all in English), and now the Room is completely inside his head. Now he can speak directly to the Chinese interlocutor, process everything in English in his head, and understand not one word of Chinese.

But all this does is further illustrate the weakness of his argument. By simulating the Room inside his head, he expects us to believe that he is the Room; in other words, that his mental simulation exactly duplicates the physical states of the Chinese Room. But here he is hoist with his own petard. His mental simulation is clearly not the same thing. It is a simulation. Therefore, the fact that he, John Searle, does not understand Chinese is not the same statement as "the Chinese Room does not understand Chinese."

Well, okay. So what? Well, the point was that the patron saint of lost causes, invoking Searle, felt that computers could never, ever really be intelligent. Can't they?

Watson clearly isn't. But Watson isn't just doing what the Chinese Room is doing (which is basically the way old digital computers, with their now-primitive 1970s architectures, did things). Watson uses an incredibly powerful technique called machine learning. Machine learning is learning by example, by trial and error. This is far more powerful than merely following rules, because the rules (if any exist) can be revised as more and more examples come in. Watson can change his behavior, just like people (well, most people).
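
Here is a minimal sketch of what "learning by example" means, again in Python. This is nothing like Watson's actual DeepQA pipeline; it is just a toy perceptron with made-up features and labels, to show behavior that changes with each mistaken example instead of being fixed by a hand-written rule book.

```python
# A toy perceptron: every wrong answer nudges the weights toward the right
# one, so the "rule" is learned from examples rather than written in advance.
# Hypothetical training data: features are (mentions_a_year, mentions_a_person)
# and the label is 1 if the clue is a "history" clue, 0 otherwise.
examples = [
    ((1, 0), 1),
    ((1, 1), 1),
    ((0, 1), 0),
    ((0, 0), 0),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(features):
    """Score the features with the current weights and threshold at zero."""
    score = bias + sum(w * x for w, x in zip(weights, features))
    return 1 if score > 0 else 0

# Trial and error: repeat over the examples, correcting after each mistake.
for _ in range(20):
    for features, label in examples:
        error = label - predict(features)
        if error != 0:
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, features)]
            bias += learning_rate * error

print(weights, bias)
print([predict(f) for f, _ in examples])  # should match the labels: [1, 1, 0, 0]
```

The rule the program ends up with was never written down by anyone; it was shaped by the examples, and more examples would reshape it.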

Does this mean Watson and computers like him (it) might eventually get smart? Why not?

2 comments:

  1. So I ran into this blog post because I decided to see what I'd get if I googled "Watson Chinese Room"; congratulations, you were the first entry.

    You seem a bit rough on Searle - he wasn't trying to show we could never have computers as smart as us, but that the traditional Turing machine approach wouldn't work. The problem of symbol-manipulating computers lacking semantic content is still open today, but it's known as the "Symbol Grounding Problem" (coined by Stevan Harnad in an excellent and simple-to-read paper of the same name) or, in a slightly different form, "The Frame Problem".

    The problem is that although Watson can answer the questions correctly, he doesn't actually know what the things he's using words for are, and so he has trouble determining what is and isn't relevant to them. Sure, syntactic machines like Watson can reproduce the same behaviors as semantic engines like humans, but the former are notoriously inefficient -- I've been told Watson is the size of a small warehouse and cost a million dollars, and yet Ken Jennings, whose brain weighs a couple of pounds, can almost keep up with him. And it's not because Ken Jennings runs all the same algorithms just faster; it's because he knows which algorithms are relevant to the given topic, because the symbols his brain manipulates have semantic content.

    I don't know much about Watson's development - I'm still reading about it - but it doesn't seem like he has perceptual embodiment or any need to ground symbols. At the bottom of it, he's still an algorithmic Turing machine, so he'll never be as efficient as a human (in terms of energy consumption, processing power, etc.). Of course, he may still be able to take over the world.

    Anyway, there's two cents from a random guy on the Internet. Hope I don't sound as nutty as your Randian Zionist friend.

  2. First! Eee-yay! Well, Omniclast, compared to what I have to say about neocons, Searle got off lightly. It certainly is true, as you say, that (as far as we can tell) Watson is merely manipulating symbols, which is what Searle insisted that ALL computers do. But since his argument was made basically about a 1970s von Neumann-style architecture, he really should have limited the argument to that type. Instead, he assumes that no architecture can do more than that. This strikes me as utter hubris, or, Clarke's First Law: "When a distinguished but elderly scientist states that something is possible, he is almost certainly right; when he states that something is impossible, he is very probably wrong."

    And yes, you are right about efficiencies. It will be some time before Watson can fit in the volume of a human brain, perhaps as long as two to five years.

    Thanks for your input.
