something to think about

August 24, 2007
Philosophy of the Moment
Why is it nice to think [that human qualities such as creativity, intuition, consciousness, esthetic or moral judgment, courage or even the ability to be intimidated by Deep Blue are beyond machines in the very long run]? Why isn't it just as nice--or nicer--to think that we human beings might succeed in designing and building brain-children that are even more wonderful than our biologically begotten children? The match between Kasparov and Deep Blue didn't settle any great metaphysical issue, but it certainly exposed the weakness in some widespread opinions. Many people still cling, white-knuckled, to a brittle vision of our minds as mysterious immaterial souls, or--just as romantic--as the products of brains composed of wonder tissue engaged in irreducible non-computational (perhaps alchemical?) processes. They often seem to think that if our brains were in fact just protein machines, we couldn't be responsible, lovable, valuable persons.

Finding that conclusion attractive doesn't show a deep understanding of responsibility, love, and value; it shows a shallow appreciation of the powers of machines with trillions of moving parts.
Dennett is right to point out that saying "Deep Blue wasn't really playing chess, just running algorithms" is bunk. I also buy the idea that Kasparov is doing a similar search (albeit "chunked" differently, with more familiarity with patterns on a more macro scale) and that neither Deep Blue nor Kasparov is all that "conscious" of the analytical process as it happens. But even though popular culture has an irritating tendency to raise the bar for what counts as "Artificial Intelligence" as soon as AI researchers come up with an approach that clears it, I think Dennett is a bit misleading in painting a symmetry in the training the two "thinking machines" have received:
Much of this analytical work had been done for Deep Blue by its designers, but Kasparov had likewise benefited from hundreds of thousands of person-years of chess exploration transmitted to him by players, coaches, and books.
It's quite reasonable to admire the human as a (for now) unique general-purpose learning machine over a one-trick pony like a chess-playing computer. Kasparov could probably learn to play a mean game of backgammon in short order, which is more than could be expected of Deep Blue. And even though I think the world champion of backgammon is yet another computer program, if an entirely new strategy game were invented, Kasparov would again have the upper hand in picking it up.

There's an analogy to be made with flight, I think: humans playing chess are the Wright Brothers, learning how to fly. A computer that has been coded to play chess is akin to a bird, shaped by millennia of evolution that it knows nothing of.

I don't think the difference is permanent: over the decades, we should learn to make better general-purpose learners, and there have been some interesting approaches to building a learner from the bottom up, like Cog and Cyc. Of course, as soon as we build a computer that can design its own smarter sequel, we'll hit that Singularity Vinge and Kurzweil are on about.

(That singularity idea is fascinating, as it makes some of those corny old sci-fi "the computers are out to get us!" clichés a little more plausible, in much the same way I never would have expected a Star-Trekian "the computer is processing so furiously that it's draining the power from the lights!" to be echoed in the battery life of my laptop during processor-intensive tasks.)

I liked the idea of Fischer Random Chess that Dennett mentions, although it seems a little less amazing once you realize it represents only 960 different possible starting positions. (Still, in theory, that's a 3-orders-of-magnitude increase in the "book".) My intuition is that such a game would favor the way computers run through possibilities over the way human grandmasters do it, but maybe that comes from a shallow knowledge of simplistic "look ahead" algorithms.
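That 960 figure checks out, by the way: Fischer Random requires the two bishops to start on opposite-colored squares and the king to start somewhere between the rooks, and a quick brute-force count over the back-rank arrangements (a little Python sketch of my own, not anything from Dennett) lands on exactly 960:

```python
from itertools import permutations

# Count the legal Chess960 (Fischer Random) starting positions by brute
# force: try every distinct arrangement of the eight back-rank pieces and
# keep the ones where the bishops sit on opposite-colored squares and the
# king stands between the rooks.
def chess960_positions():
    pieces = "RNBQKBNR"
    legal = 0
    for perm in set(permutations(pieces)):  # set() drops duplicate arrangements
        b1, b2 = (i for i, p in enumerate(perm) if p == "B")
        if b1 % 2 == b2 % 2:                # bishops must be on opposite colors
            continue
        r1, r2 = (i for i, p in enumerate(perm) if p == "R")
        if not (r1 < perm.index("K") < r2):  # king must be between the rooks
            continue
        legal += 1
    return legal

print(chess960_positions())  # 960
```

The same number falls out of simple counting: 16 ways to place the bishops (4 light squares times 4 dark), 6 squares left for the queen, 10 ways to place the two knights, and the king-between-rooks rule fixes the last three pieces, so 16 × 6 × 10 = 960.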
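For what it's worth, the simplistic "look ahead" I have in mind is plain minimax: score every position reachable within a few moves and assume each side picks its best reply. Here's a toy sketch over a made-up game tree (the numbers are arbitrary leaf evaluations and have nothing to do with Deep Blue's actual search or scoring):

```python
# Minimal minimax over a nested-list game tree: inner lists are choice
# points, integers are static evaluation scores at the leaves. The
# maximizing player picks the highest score; the opponent picks the lowest.
def minimax(node, maximizing=True):
    if isinstance(node, int):  # leaf: return its static evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A hypothetical 2-ply tree: our three moves, each with three replies.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree))  # 3: the best outcome we can guarantee against best play
```

Deep Blue's real search was vastly more sophisticated (alpha-beta pruning, hand-tuned evaluation, special-purpose hardware), but this is the brute-force core that makes a shuffled opening position no harder for the machine than the standard one.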