
Why We Don’t Need to Worry About Robots’ Rights

Last Thursday, I went to a panel discussion held at Stanford Law School by The Center for Internet and Society on “Legal Perspectives in Artificial Intelligence.” My mind is mostly buried in AI, but since I have recently become more interested in policy in general and the social impact of technology, I thought it could be interesting to see where the two fields cross over.

There were a lot of possible intersections, such as the use of AI to help lawyers put together cases, or IP rights to AI code and programs. The topic they mostly discussed, however, was the possibility of AI being considered a legal person and what the implications of that would be. It was an unfortunate angle to take because AI equal to a human doesn’t exist, so the discussion was mostly non-answers, roughly of the form, “Interesting; we’ll see what happens.” They also chose not to dive into the philosophical aspects too much, with only minor discussion of philosophical zombies (beings that behave exactly like humans but have no consciousness), and instead left those as largely open questions as well.

Setting aside how unsatisfying those answers were, I was also disappointed by the conversation as a whole because I found their conception of AI somewhat narrow, which limited the topics they could consider. Instead of considering the state of technology as it is today and the issues surrounding it, they mostly clung to the more fantastic view of AI. This view, perpetuated largely by popular media, is best represented by robots like C-3PO that are human in all but form. More generally, this view treats AI as a system with intentions, self-motivation, and other psychological properties like a human’s. And that AI doesn’t exist.

Stepping back from that, however, we already have some forms of AI, and I will make the stronger claim that what we have now will be the form of AI for the foreseeable future (with respect to legal rights; of course we’re making great progress in the nitty-gritty). So I think this panel was appropriate for us now, but for different reasons than the organizers likely considered. AI is here now, and it has plenty of problems surrounding it. For better or worse, though, I think it’s largely invisible in our lives. Let me give a few examples of AI in our lives: what its role is, why I don’t see it changing into HAL, and what the legal implications of it are.

First, AI in the market. The panelists discussed the legal status of AI as a trustee, an advisor to a trustee, or as a business operator. I don’t see this coming soon because AIs don’t have their own desires, so it doesn’t make sense for them to be in charge. AI can be a tool to make recommendations and crunch the numbers, but the last mile will be all blood and guts. And this form of AI already exists. Take algorithmic trading: a computer executes trades for a fund or some other trader based only on the numbers, often faster than humans are capable of. On the whole, it’s a black box. Very smart physicists and computer scientists can build models to make it run, but once it’s going, it’s past our ability to actively monitor. Just last year, the Dow Jones briefly crashed in the “Flash Crash,” which was largely blamed on algorithmic trading. The SEC ended up changing some rules in response, so this is a problem being dealt with right now. I haven’t followed the situation closely, but I imagine there are questions about liability when AI wreaks havoc on the market.
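To make the “black box” a bit more concrete, here is a minimal sketch of the kind of rule an algorithmic trader automates. It’s a toy moving-average crossover strategy; the prices and window sizes are invented for illustration, and real systems are vastly more sophisticated (and faster):

```python
# A toy illustration (not any real trading system): a moving-average
# crossover rule, one of the simplest "algorithmic trading" strategies.
# The price series below is made up for the example.

def moving_average(prices, window):
    """Average of the last `window` prices."""
    return sum(prices[-window:]) / window

def decide(prices, short=3, long=5):
    """Buy when the short-term average is above the long-term one,
    sell when it's below; no human signs off on each trade."""
    if len(prices) < long:
        return "hold"
    short_avg = moving_average(prices, short)
    long_avg = moving_average(prices, long)
    if short_avg > long_avg:
        return "buy"
    if short_avg < long_avg:
        return "sell"
    return "hold"

prices = [100.0, 101.5, 99.8, 102.3, 103.1, 104.0, 102.7]
print(decide(prices))  # -> "buy" for this made-up price series
```

The point isn’t the strategy itself but that, once it’s running, decisions like these fire on their own, over and over, faster than anyone can watch.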

Second, health care. This came up in the discussion of Watson, the Jeopardy-playing AI that IBM claims it wants to retool for health care. The panelists were concerned about the possible issues here, as health is at least as touchy a topic as the market. I’m not scared about it, though, because we still have doctors. Doctors may receive advice from computers, but the final decision is going to be in the hands of a human. We don’t send our brightest to school for a decade just to let them defer judgment: they’ll still sit between a patient and the AI. Even so, this is again already happening today. In fact, we apparently even have a journal dedicated to the topic. As it is, we can use probabilistic models to diagnose various illnesses by telling a computer what the symptoms are, and it’ll spit back the likelihood of various possibilities. AI researchers will tell you that these systems are actually better than doctors, since they have the accumulated knowledge of many more cases than any one doctor could ever know. Importantly, AI here is just a tool, not a legal person. We do have questions today, such as patient privacy when data are being aggregated into a single machine, and these will be the questions moving forward.
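For a sense of what “telling a computer the symptoms” looks like, here is a toy naive Bayes diagnoser, one common kind of probabilistic model. Every disease, symptom, and probability below is made up for the example; a real system would estimate them from many thousands of cases:

```python
# A minimal sketch of probabilistic diagnosis (naive Bayes). The
# diseases, symptoms, and numbers are invented for illustration.

PRIOR = {"flu": 0.05, "cold": 0.20, "allergy": 0.10}

# P(symptom | disease), assumed independent given the disease.
LIKELIHOOD = {
    "flu":     {"fever": 0.90, "cough": 0.80, "sneezing": 0.30},
    "cold":    {"fever": 0.20, "cough": 0.70, "sneezing": 0.60},
    "allergy": {"fever": 0.01, "cough": 0.30, "sneezing": 0.90},
}

def diagnose(symptoms):
    """Return P(disease | symptoms) for each disease via Bayes' rule."""
    scores = {}
    for disease, prior in PRIOR.items():
        score = prior
        for s in symptoms:
            score *= LIKELIHOOD[disease][s]
        scores[disease] = score
    total = sum(scores.values())
    return {d: s / total for d, s in scores.items()}  # normalize

print(diagnose(["fever", "cough"]))  # flu comes out most likely here
```

Note that the output is a set of likelihoods, not a verdict; the doctor still decides what to do with them.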

To wrap this up, let’s bring this around to an example of AI that you must be familiar with to be reading this: web search. On the surface, it may seem like the human is doing all the work, but any non-trivial search engine you’ll encounter has all sorts of interesting AI in it, such as trying to figure out whether you meant the scooter, the mouse, the hygiene product, or the phone when you type in, “Why isn’t my razor working?” The net result is that people usually click on the first link, which means we’ve already deferred a lot of our choices to AI in picking the “best match” for our search terms. But that’s a far cry from R2-D2, and hopefully no one will ever sue a search engine for giving them bad results.
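As a rough sketch of that kind of disambiguation, here is a toy that guesses which “razor” a query means by counting overlap with hand-picked context words. The senses and word lists are invented, and real engines rely on far richer signals (click logs, link structure, query history):

```python
# A toy sketch of query disambiguation: guess which "razor" the user
# means by word overlap with hand-picked sense vocabularies. All of
# the senses and word lists here are invented for the example.

SENSES = {
    "scooter": {"wheel", "kick", "ride", "brake"},
    "mouse":   {"gaming", "dpi", "click", "usb"},
    "shaving": {"blade", "shave", "cartridge", "skin"},
    "phone":   {"screen", "battery", "call", "flip"},
}

def disambiguate(query):
    """Pick the sense whose context words overlap the query most."""
    words = set(query.lower().split())
    return max(SENSES, key=lambda sense: len(SENSES[sense] & words))

print(disambiguate("why isn't my razor working after replacing the battery screen"))
# -> "phone" for this made-up query
```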

And it’s everywhere else, too. Google Translate, autonomous cars, Bing flight search, Amazon search recommendations, and Siri are all examples of what AI really looks like today, and frankly, it’s not that scary. None of it may sound that impressive or very AI-like, but that tends to be a funny problem with AI as a field of research: once we figure out how to do it, it’s not AI anymore.

I think it’s important to note that all of the things I just rattled off are tools, not independent agents. We build things that we want, and for the most part, we want things done for us while leaving us in control. This means that we build wonderful systems that use AI to make our lives easier, but that last mile is still human.

Given that this is what AI is and what it will be (so I claim), the issues are already in front of us now. And if they don’t seem like issues, it’s because they aren’t. Do we worry about incorrect diagnoses from AI? A doctor may blame a computer, but it’s still the doctor’s call. What about an autonomous car getting into an accident? Assuming it’s entirely autonomous, it’s no different from a tram running on a preset schedule. Cars aren’t going to have their own desires (such as to tailgate the jerk who cut them off), and since we’ll understand how they work, the mystery is gone.

So, in summary: AI is here now, and what we have is what it will be. There are legal issues to consider with respect to AI, but we shouldn’t be worrying about AI as a legal person. Instead, appreciate and understand how important AI already is in your lives. As tools.
