Well, Watson beat its human competitors on Jeopardy, but I think the challenge Ken Jennings gave to IBM today sums up the one-trick pony aspect of Watson: Let's put it on Dancing with the Stars to see what it can really do.
Watson could neither see nor hear, and it could not write, let alone dance. Watson was trained, in this instance, precisely for winning at Jeopardy, just as Deep Blue was built to win at chess. Deep Blue could not play Jeopardy, and although Watson could probably name chess moves and players, it cannot play chess.
Watson is an interesting advance, but it is not a universal one. Some of Watson's wrong answers were indeed very poor choices. As with the teams working on Darpa's Urban Challenge (and its earlier desert version), the researchers make little tweaks to algorithms and sensors to optimize the device for the terrain or knowledge the challenge requires. It is a very different approach from human learning. Mostly, these machines have very limited ability to learn independently, and when they do learn, the domain is very constrained. Watson could improve its performance and become even more impressive, but even more impressive at Jeopardy and not much else.
Watson may well become the grandfather of a great answer machine in the sky, providing facts and figures faster than Google, but it still won't be able to tell me the best source of knowledge about something or help me figure out what I really want to ask it. Those skills require something entirely different from Watson's current algorithms, as do voice understanding and penmanship.
NPR ran a good interview today: hear the All Things Considered conversation with Ken Jennings and Watson's principal investigator, IBM's David Ferrucci, here.