Thoughts on OpenAI Figuring Out Dota 2

In 1996, Deep Blue played competitive Chess against world champion Garry Kasparov. In 2011, Watson beat Ken Jennings at Jeopardy. In 2016, AlphaGo played and beat Lee Sedol in Go. And in 2018, OpenAI Five played 2 games against professional Dota 2 players, and the humans survived both very fun games. But in another year, I think that OpenAI Five will handily beat the best human Dota 2 teams.

(Disclaimer: I am neither a Dota 2 nor a Machine Learning expert, and I didn’t do much research for this blog post before diving into writing)

Why We Don’t Need to Worry About Robots’ Rights

Last Thursday, I went to a panel discussion held at the Stanford Law School by The Center for Internet and Society on “Legal Perspectives in Artificial Intelligence.” My mind is mostly buried in the AI itself, but since I have recently become more interested in policy in general and the social impact of technology, I thought it could be interesting to see where the crossover is.

There were a lot of possible intersections, such as the use of AI in assisting lawyers in putting together cases or IP rights to AI code and programs. The topic they mostly discussed, however, was the possibility of AI being considered a legal person and what the implications of that were. It was an unfortunate angle to take because AI equal to a human doesn’t exist, so it was mostly non-answers, roughly of the form, “Interesting; we’ll see what happens.” They also chose not to jump into the philosophical aspects too much, with only minor discussion of philosophical zombies (a being that behaves exactly like a human but has no consciousness), and instead left those as largely open questions as well.

Disregarding how unsatisfying those answers were, I was also disappointed by the conversation as a whole because I found their conception of AI somewhat narrow, and that limited the topics they could consider. Instead of considering the state of technology as it is today and the issues surrounding that, they mostly clung to the more fantastic view of AI. This view, perpetuated largely by popular media, is best represented by robots like C-3PO that are human in all except form. More generally, this view treats AI as a system with intentions, self-motivation, and more psychological properties similar to humans. And that AI doesn’t exist.

Stepping back from that, however, we already have some forms of AI, and I will make the stronger claim that what we have now will be the form of AI for the foreseeable future (with respect to legal rights; of course we’re making great progress in the nitty-gritty). So, I think that this panel was appropriate for us now, but for different reasons than what the organizers likely considered. AI is here now and it has plenty of problems surrounding it. For better or worse, though, I think it’s largely invisible in our lives. Let me give a few examples of AI in our lives, what its role is, why I don’t see it changing into HAL, and what the legal implications of it are.

First, AI in the market. The panelists discussed the legal status of AI as a trustee, an advisor to a trustee, or as a business operator. I don’t see this coming soon because AI doesn’t have its own desires, so it doesn’t make sense for it to be in charge. AI can be a tool to make recommendations and crunch the numbers, but the last mile will be all blood and guts. And this form of AI already exists. Take algorithmic trading: a computer is executing trades for a fund or some other trader based only on the numbers and often faster than humans are capable of. On the whole, it’s a black box. Very smart physicists and computer scientists can build models to make it run, but once it’s going, it’s past our ability to actively monitor it. Just last year, the Dow Jones crashed, which was largely blamed on algorithmic trading. The SEC ended up changing some rules based on this, so this is a problem being dealt with right now. I haven’t followed the situation, but I imagine that there are questions about liability when AI wreaks havoc on the market.
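
To make “trading based only on the numbers” concrete, here is a toy sketch of one of the simplest algorithmic trading rules, a moving-average crossover. Everything here (the price series, the window sizes, the rule itself) is an invented illustration of the general idea, not how any real fund trades:

```python
# A toy "algorithmic trader": buy when the short-term average price rises
# above the long-term average, sell on the reverse. No human in the loop --
# the decision comes from nothing but recent numbers.

def moving_average(prices, window):
    """Average of the last `window` prices."""
    return sum(prices[-window:]) / window

def crossover_signal(prices, short=3, long=5):
    """Return 'buy', 'sell', or 'hold' based purely on recent prices."""
    if len(prices) < long:
        return "hold"  # not enough data yet
    short_ma = moving_average(prices, short)
    long_ma = moving_average(prices, long)
    if short_ma > long_ma:
        return "buy"
    if short_ma < long_ma:
        return "sell"
    return "hold"

prices = [100, 101, 99, 98, 97, 99, 102, 105]
print(crossover_signal(prices))  # "buy" -- the short-term average pulled ahead
```

A real system layers far more sophisticated models on top, but the shape is the same: data in, trade out, no desires anywhere, which is exactly why the liability question is interesting.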

Second, health care. This came up in the discussion of Watson, the Jeopardy playing AI that IBM claims it wants to retool for health care. They were concerned about the possible issues here, as health is at least as touchy a topic as the market. I’m not scared about it, though, because we still have doctors. Doctors may receive advice from computers, but the final decision is going to be in the hands of a human. We don’t send our brightest to school for a decade just to let them defer judgement: they’ll still sit between a patient and AI. Even so, this is again already happening today. In fact, we apparently even have a journal dedicated to this topic. As it is, we can use probabilistic models to diagnose various illnesses by telling a computer what the various symptoms are, and it’ll spit back the likelihood of various possibilities. AI researchers will tell you that these models are actually better than doctors since they have the accumulated knowledge of many more cases than any 1 doctor could ever know. Importantly, AI here is just a tool, not a legal person. We do have questions today, such as patient privacy when the data are being aggregated into a single machine, and these will be the questions moving forward.
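
A minimal sketch of that “symptoms in, likelihoods out” idea is naive Bayes. The diseases, priors, and conditional probabilities below are invented for illustration, not medical data:

```python
# Naive Bayes diagnosis sketch: P(disease | symptoms) is proportional to
# P(disease) * product over symptoms of P(symptom observation | disease).
# All numbers here are made up.

PRIORS = {"flu": 0.10, "cold": 0.30, "allergy": 0.60}

# P(symptom present | disease)
LIKELIHOODS = {
    "flu":     {"fever": 0.90, "cough": 0.80, "sneezing": 0.30},
    "cold":    {"fever": 0.20, "cough": 0.70, "sneezing": 0.60},
    "allergy": {"fever": 0.01, "cough": 0.30, "sneezing": 0.90},
}

def diagnose(symptoms):
    """Return P(disease | observed symptoms), normalized over the diseases."""
    scores = {}
    for disease, prior in PRIORS.items():
        p = prior
        for symptom, present in symptoms.items():
            p_s = LIKELIHOODS[disease][symptom]
            p *= p_s if present else (1 - p_s)
        scores[disease] = p
    total = sum(scores.values())
    return {d: s / total for d, s in scores.items()}

posterior = diagnose({"fever": True, "cough": True, "sneezing": False})
print(max(posterior, key=posterior.get))  # "flu" -- the tool's top suggestion
```

The output is a ranked list of possibilities, not a verdict: the doctor still sits between the patient and the model, which is the whole point of the paragraph above.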

To wrap this up, let’s bring this around to an example of AI that you must be familiar with to be reading this: web search. On the surface, it seems like this is a task that humans are performing, but any non-trivial search engine you’ll encounter has all sorts of interesting AI in it, such as trying to figure out whether you meant the scooters, the mice, the hygiene product, or the phone when you type in, “Why isn’t my razor working?” The net result is that people usually click on the first link, which means that we’ve already deferred a lot of our choices to AI in picking the “best match” to our search terms. But that’s a far cry from R2D2, and hopefully, no one will ever sue a search engine for giving them bad results.
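
As a toy version of that “razor” disambiguation, a ranker can score each candidate sense by its overlap with the rest of the query and put the best match first. The senses and context words here are invented for illustration; real search engines use vastly richer signals:

```python
# Toy query disambiguation: score each candidate sense of "razor" by how many
# words it shares with the query, then rank. All senses/context words made up.

SENSES = {
    "razor scooter": {"scooter", "wheel", "ride", "fold"},
    "razer mouse":   {"mouse", "gaming", "dpi", "click"},
    "razor blade":   {"blade", "shave", "cartridge", "dull"},
    "razr phone":    {"phone", "flip", "battery", "screen"},
}

def rank(query):
    """Rank senses by overlap between query words and each sense's context."""
    words = set(query.lower().split())
    scored = [(len(words & ctx), sense) for sense, ctx in SENSES.items()]
    scored.sort(reverse=True)  # highest-overlap sense first
    return [sense for score, sense in scored]

results = rank("why won't my razor blade shave anymore")
print(results[0])  # "razor blade" -- the "first link" most people will click
```

That first-ranked result is the choice we’ve quietly deferred to the machine.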

And it’s everywhere else, too. Google translate, autonomous cars, Bing flight search, Amazon search recommendations, and Siri are all examples of what AI really looks like today, and frankly, it’s not that scary. None of it may sound that impressive or very AI-like, but that tends to be a funny problem with AI as a field of research: once we figure out how to do it, it’s not AI anymore.

I think it’s important that all of those things I just rattled off are tools, not independent agents. We build things that we want, and for the most part, we want things done for us while leaving us in control. This means that we build wonderful systems that use AI to make our lives easier, but that last mile is still human.

Given that this is what AI is and what it will be (so I claim), then the issues are already in front of us now. And if they don’t seem like issues, it’s because they aren’t. Do we worry about incorrect diagnoses from AI? A doctor may blame a computer, but it’s still the doctor’s call. What about an autonomous car getting into an accident? Assuming it’s entirely autonomous, it’s no different than trams that have a preset schedule. Cars aren’t going to have their own desires (such as to tailgate the jerk who cut them off), and since we’ll understand how they work, the mystery is gone.

So in summary, AI is here now, and it’s as it will be. There are legal issues to consider with respect to AI, but we shouldn’t be worrying about AI as a legal person. And appreciate and understand how important AI already is in your lives. As tools.

A Few Thoughts on Watson on Jeopardy

Earlier this week, Watson, an artificial intelligence program developed by IBM, competed against Ken Jennings and Brad Rutter on Jeopardy!. Seeing as this was the closest thing to my nerd roots being in pop culture, I avidly watched all 3 days of games and the NOVA episode on it, and attended a viewing party on-campus hosted by Stanford researchers and IBM, including a visitor from Almaden. In case you haven’t heard the result, the human players were pummeled by Watson, though it was fun to watch nonetheless. My thoughts on it aren’t entirely coherent, but here are a few tidbits:

  • Watson was impressive but not that impressive. Notably, I didn’t get any insight into what it’s actually doing from the NOVA special, the speaker from IBM, or the Jeopardy! episodes. My intuition is that the bulk of the power here is having a much larger dataset and greater computing power than most systems before. I don’t doubt that there are optimizations and clever insights into making it perform well, but I haven’t heard of any large departures from known techniques
  • Watson was very good at hitting the Daily Doubles. Part of it, of course, was that it was requesting most of the clues, but I’m not sure whether there is a known distribution of Daily Double locations. I would presume so from an explanation in this Ars Technica article, though that seems somewhat strange
  • In the 2nd article up there, the creators claim that Watson doesn’t have a speed advantage in buzzing in. I think it’s very clear from watching the games that it did. Consider this situation: you’re watching a stopwatch and want to stop it as close to 10 seconds as possible. How accurate do you think you’re going to be? Okay, I just tried with my watch, and I did pretty well (10.01, 10.00, 10.01, 10, 9.98), but even so, I’m still willing to bet that a computer could be faster than me. That last trial is notable, since the rules of Jeopardy! say that if you buzz in early, you get briefly locked out
  • Having read a little more on the game, the way buzzing works is that a light comes on after Alex finishes the question, and you buzz as quickly as possible. How well you buzz is critical to the game. Looking at the expressions of the other players, it’s clear that they knew many answers as well: they just couldn’t buzz quickly enough, and that’s true between human players as well. I think I read somewhere that Ken Jennings insisted that other players have more time to practice on the buzzers because he just got so good at it over time. And it’s recommended that players at home practice with a clicky-pen to get the timing right
  • My favorite moment in all of the games was when Ken buzzed in with a wrong answer, then Watson buzzed in with the same (wrong) answer. Alex, in that manner that makes him seem like a complete jerk, said, “No. Ken said that.” Priceless
  • I really enjoyed watching the NOVA special since the topic is close to what I know, and I realized how much of a gloss their content is once you have a sense of what’s going on. Technically, a lot of what was said isn’t wrong, but it still feels misleading and doesn’t get at the interesting subtleties of what’s actually happening. The visualizations are also pretty hilarious, such as floating equations of Greek symbols representing code
  • The human players were fun to watch. Brad is strangely expressive with his eyebrows and head movements that don’t obviously correlate with what he’s saying, but he’s enthusiastic and fun to listen to. Ken is momentarily but very visibly affected by things. For example, he seemed crushed in Double Jeopardy in the final game when Watson hit the Daily Double that he presumably wanted
  • Watson’s betting is strange, though I’ll assume it’s well thought out. This made me realize that Jeopardy is very much about playing the game well. A lot of people know a lot of answers, but the choice of clues and betting and reaction times and pacing are what really makes someone a winner
  • Watson in general was able to compute fast enough to respond, but on a couple of questions, it seemed as though it wasn’t fast enough, especially on very short clues. But that might just be me imposing structure on the game based purely on my intuitions
  • Watson apparently learned about categories based on the correct answers of other questions in that category. If the players knew this, that would seem to encourage them to try the high dollar amounts first and depend on their ability to understand the semantic structure of the questions before Watson really understood what was going on. I think this might be conventional play anyway but is something of a strategic choice to make
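
To put a rough number on the buzzer point above, here is a small simulation of the race to buzz after the light comes on. The timing figures (human reaction around 150 ms with jitter, machine around 10 ms, a lockout penalty for buzzing early) are assumptions for illustration, not measurements from the show:

```python
# Simulate the buzzer race: human vs. machine reaction times after the
# "buzzers open" light. All timing parameters are assumed, not measured.

import random

def buzz_race(trials=10_000, seed=0):
    rng = random.Random(seed)  # seeded so the result is reproducible
    machine_wins = 0
    for _ in range(trials):
        # Buzz times relative to the light, in milliseconds.
        human = rng.gauss(150, 40)   # anticipating, with human jitter
        machine = rng.gauss(10, 2)   # reacting to the signal itself
        if human < 0:
            human += 250             # buzzed early: briefly locked out
        if machine < human:
            machine_wins += 1
    return machine_wins / trials

print(buzz_race())  # fraction of buzzes the machine wins under these assumptions
```

Under any remotely similar numbers, the machine wins nearly every race, which is why knowing the answer wasn’t enough for the human players.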

Given all of these points, know that I’m still impressed with Watson. I just don’t think that the most obvious interpretation of the games (that AI has taken huge strides) is really indicative of what’s going on here. I’ll admit that I also drew the parallel to Deep Blue and got excited about this as a big challenge for AI, but there’s definitely a context for understanding what Watson did. And that, in my opinion, makes this whole series a fun, silly, impressive, but not particularly significant or surprising publicity event for Jeopardy! and IBM.