A Few Thoughts on Watson on Jeopardy

Earlier this week, Watson, an artificial intelligence program developed by IBM, competed against Ken Jennings and Brad Rutter on Jeopardy!. Seeing as this was the closest thing my nerd roots have come to pop culture, I avidly watched all three days of games and the NOVA episode on it, and attended an on-campus viewing party hosted by Stanford researchers and IBM, including a visitor from Almaden. In case you haven’t heard the result, the human players were pummeled by Watson, though it was fun to watch nonetheless. My thoughts aren’t entirely coherent, but here are a few tidbits:

  • Watson was impressive but not that impressive. Notably, I didn’t get any insight into what it’s actually doing from the NOVA special, the speaker from IBM, or the Jeopardy! episodes. My intuition is that the bulk of the power here is having a much larger dataset and greater computing power than most systems before it. I don’t doubt that there are optimizations and clever insights into making it perform well, but I haven’t heard of any large departures from known techniques
  • Watson was very good at hitting the Daily Doubles. Part of it, of course, was that it was requesting most of the clues, but I’m not sure whether there is a known distribution of Daily Double locations. I would presume so from an explanation in this Ars Technica article, though that seems somewhat strange
  • In the second article up there, the creators claim that Watson doesn’t have a speed advantage in buzzing in. I think it’s very clear from watching the game that Watson totally did. Consider this situation: you’re watching a stopwatch and want to stop it as close to 10 seconds as possible. How accurate do you think you’re going to get? Okay, I just tried with my watch, and I did pretty well (10.01, 10.00, 10.01, 10.00, 9.98), but even so, I’m still willing to bet that a computer could be faster than me. That last trial is notable, since the rules of Jeopardy! say that if you buzz in early, you get locked out briefly
  • Having read a little more on the game, the way buzzing works is that a light comes on after Alex finishes the question, and you buzz as quickly as possible. How well you buzz is critical to the game. Looking at the expressions of the other players, it’s clear that they knew many answers as well: they just couldn’t buzz quickly enough, and that’s true between human players as well. I think I read somewhere that Ken Jennings insisted that other players get more time to practice on the buzzers because he had gotten so good at it over time. And it’s recommended that players at home practice with a clicky pen to get the timing right
  • My favorite moment in all of the games was when Ken buzzed in with a wrong answer, then Watson buzzed in with the same (wrong) answer. Alex, in that manner that makes him seem like a complete jerk, said, “No. Ken said that.” Priceless
  • I really enjoyed watching the NOVA special since the topic is close to what I know, and I realized how much of a gloss their content is once you have a sense of what’s going on. Technically, a lot of what was said isn’t wrong, but it still feels misleading and doesn’t get at the interesting subtleties of what’s actually happening. The visualizations are also pretty hilarious, such as floating equations of Greek symbols representing code
  • The human players were fun to watch. Brad is strangely expressive, with eyebrow and head movements that don’t obviously correlate with what he’s saying, but he’s enthusiastic and fun to listen to. Ken’s reactions are momentary but very visible. For example, he seemed crushed in Double Jeopardy in the final game when Watson hit the Daily Double that he presumably wanted
  • Watson’s betting is strange, though I’ll assume it’s well thought out. This made me realize that Jeopardy! is very much about playing the game well. A lot of people know a lot of answers, but the choice of clues, the betting, the reaction times, and the pacing are what really make someone a winner
  • Watson was generally able to compute fast enough to respond, but on a couple of questions, especially very short clues, it seemed as though it wasn’t fast enough. But that might just be me imposing structure on the game based on my intuitions
  • Watson apparently learned about categories from the correct answers to other questions in that category. If the players knew this, it would seem to encourage them to try the high dollar amounts first and rely on their ability to understand the semantic structure of the questions before Watson figured out what was going on. I think this might be conventional play anyway, but it’s something of a strategic choice
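The buzzer point above can be made concrete with a toy simulation. The numbers here are made-up illustrative assumptions (not measurements of Watson or of any contestant): suppose a practiced human buzzes about 150 ms after the light with roughly 30 ms of jitter, while a machine buzzes with a small, nearly fixed delay. Even when both "know" the answer, the machine wins the buzz essentially every time:

```python
import random

random.seed(0)

def human_buzz():
    # Assumed: ~150 ms mean reaction time after the light, ~30 ms jitter
    # (illustrative values, not real contestant data)
    return random.gauss(0.150, 0.030)

def machine_buzz():
    # Assumed: ~10 ms nearly fixed electromechanical delay
    return random.gauss(0.010, 0.001)

trials = 10_000
machine_wins = sum(machine_buzz() < human_buzz() for _ in range(trials))
print(f"machine wins the buzz {machine_wins / trials:.1%} of the time")
```

The takeaway isn’t the specific percentages but the shape of the argument: a consistent low-latency responder dominates a jittery one, so knowing the answers is necessary but not sufficient to score.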

Given all of these points, know that I’m still impressed with Watson. I just don’t think that the most obvious interpretation of the game (that AI has taken huge strides) is indicative of what’s really going on here. I’ll admit that I also drew the parallel to Deep Blue and got excited about this as a big challenge for AI, but there’s definitely a context for understanding what Watson did. And that, in my opinion, makes this whole series a fun, silly, impressive, but not significant or surprising publicity event for Jeopardy! and IBM.
