July 23, 2008
So I've been twittering about Searle's Chinese Room lately, might as well ramble about it at more length and get it out of my system... (kisrael.com, come for the quotes and links, stay for the long pseudo-intellectual grumblefests!)
I'm reading Jeff Hawkins' "On Intelligence", and so far it seems pretty promising... the core ideas being that intelligence is a memory-prediction system, and that AI researchers do themselves a disservice by not looking at the actual physical mechanisms of the brain, just as neuroscientists do themselves a disservice by not taking a step back and focusing on the large-scale process rather than specific subsystems.
So far he has two points I disagree with... one is that Searle's Chinese Room is a satisfactory demonstration that "behavioral equivalence is not enough", that you could somehow fake intelligence without being intelligent. The second is this idea that intelligence is strictly an internal property. He might not be too far off on the second idea, but from a utilitarian standpoint, a 100% internal intelligence is of zero interest to us... one could imagine a group of hyperintelligent rocks, each with a rich internal state that is a lovely model of the whole environment, able to make simulations and predictions with stunning accuracy, but if there is zero interaction with the outside world, who cares? These smartrocks are indistinguishable from, you know, rocks! (I remember writing a poem about this in high school, about a rock that figured out world peace and all that, but couldn't tell anyone 'cause it was a rock.) Down this path lies stuff like Greg Egan's "Permutation City", where a whole field of floating dust specks might be intelligent, if we just knew how to interpret/communicate with it... a kind of weird pantheism, or at least belief in pan-intelligence.
So... the Chinese Room. You can read Hawkins' restatement of the thought experiment here. He concludes that "no matter how cleverly a computer is designed to simulate intelligence by producing the same behavior as a human, it has no understanding and it is not intelligent".
I find this conclusion absurd. First, while this is an abstract thought experiment and thus a huge amount of handwaving is permitted, it's important to note how hyper-complex the "big book of instructions and all the pencils and scratch paper he could ever need" would have to be if the setup is going to effectively simulate a person conversing intelligently in Chinese. It's an important thing to note, because part of Searle's argument is secretly an appeal to intuition: lines like "after all, it's just a book, and books can't think!" will come up, but that is terribly misleading because it ignores the overwhelming scope of that book... it needs to contain "simple" abstract symbol manipulations that can "fake" someone who has a deep knowledge of the world, Chinese culture, history, itself, the laws of cause and effect, a sense of humor, what it means to be in love -- in short, everything necessary to convince the person passing in the notes and reading the responses that there is a Chinese speaker inside there. That book would need to be almost unimaginably huge and complex to pull this off.
But say we grant the theoretical possibility of this book. There is a perfectly valid answer to "where does the understanding lie in this scenario?", a reply formulated shortly after the original idea was proposed, called the "Systems Reply"... the man inside might not understand Chinese, and a static book and pile of scratch paper certainly doesn't understand Chinese, but the System as a whole -- man, book, paper, room -- absolutely does. For me this is one of those ideas that I almost can't believe isn't intuitively and universally obvious.
Searle's response is to say, ok, well what if the man memorizes the book, and has a good enough memory to do all the steps in his head... There! He now can speak Chinese without knowing Chinese! (As I think Dennett points out, he now knows Chinese, but in the "wrong way".) Going back to the idea of the room, I guess the argument is that because there are certain things the odd intelligence of room, man, and book can't do, we're not counting it as "true intelligence". Oddly enough, for me this goes back to the idea of the hyperintelligent rocks, in that the issue is one of information getting in and out. Ask the Chinese Room about a beautiful grassy meadow, and it talks about the meadow. Searle seems to argue, though, that it doesn't really understand what a meadow is, it's just doing abstract symbol manipulation. But if enough is going on inside that you can ask it ongoing questions about the meadow -- what it feels like, how the grass gently sways in the wind, etc -- and are satisfied by the humanness of the answers, then to say that there's no "real" understanding on all that scrap paper, or in that book, or in the diligent, boring work of that man is just being ornery, and terribly biased against ways of being intelligent that don't physically resemble our own brains. So just as the guy who internalized the Chinese Room might not have access to his understanding of Chinese the way someone who learned Chinese the usual way does, we might not be able to comprehend the internal states of the physical Chinese Room, but I can't see that there's any way of deeply faking understanding without having understanding.
(Someone commenting on the Wikipedia page points out how, sadly, too often school can look like a big Chinese Room, where a kid might be given a statement like "the heart is associated with the flow of blood", and later be given a question like "what is the heart associated with the flow of? A. snot B. blood C. poop"... thus becoming a simple Chinese Room that can answer a basic question about biology by pattern recognition, with no true sense of meaning or depth.)
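That school-as-Chinese-Room aside is easy to make concrete. Here's a toy sketch in Python (my own hypothetical illustration, not anything from Hawkins or Searle): a pure lookup table that "answers" the biology question by pattern matching, with no model of what a heart or blood actually is.

```python
# A toy Chinese Room: a rule book that maps recognized questions to
# canned answers. It manipulates symbols it was handed; it has no
# understanding of hearts, blood, or anything else.
# (Searle's actual Room would need an unimaginably larger book to
# survive open-ended conversation -- this is just the kid-at-school case.)
rule_book = {
    "what is the heart associated with the flow of?": "blood",
}

def chinese_room(question: str) -> str:
    """Normalize the question and look it up; no meaning involved."""
    key = question.strip().lower()
    return rule_book.get(key, "I don't know.")

print(chinese_room("What is the heart associated with the flow of?"))
print(chinese_room("Why does blood need to flow?"))
```

The first call "passes the test" and the second falls flat immediately, which is really the point of the argument above: a small rule book is trivially exposed, so any book that *couldn't* be exposed would have to encode something that looks a lot like understanding.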
So I'm still optimistic about Hawkins' book... he may be more concerned with the layman's understanding of computers, and arguing that an intelligent system will operate very little like the main part of a computer does. (Even if the end result was some kind of "brain simulation" that happens to run on a traditional-style computer, kind of a neuronic VR... I'm not far enough along to know if he would accept the plausibility of that or not.) Still, his begging the question of whether a Chinese Room would have understanding rankles me a great deal.
firefox spellcheck FAIL: temprement -> procurements, procurement, procurement's, premenstrual, excremental, Add to dictionary. Yeesh.