Q&A

Why Stereo Systems Won’t Turn into the Death Star

An interview with Oren Etzioni, one of the world’s leading experts on artificial intelligence, who gives his perspective on the promise and peril of the technology

Uri Pasovsky | 17:57, 21.06.18
Oren Etzioni has been the chief executive officer of the Allen Institute for Artificial Intelligence in Seattle since it was founded in 2014. This ambitious organization, whose goal is “to contribute to humanity through high-impact AI research and engineering,” was founded with funding from billionaire Paul Allen, one of the founders of Microsoft. In February, Mr. Allen committed to investing $125 million more in the center.

 


 

Mr. Etzioni, 54, is the son of Holocaust survivors. His father, Amitai Etzioni, is a renowned sociologist who once served as an advisor to the Carter administration. Oren Etzioni was born in the U.S. and spent a few of his childhood years in Israel. He studied computer science at Harvard, earned a Ph.D. at Carnegie Mellon, and became an expert in artificial intelligence, publishing prolifically and winning awards for his foundational research. He coined the widely used term “machine reading.” For 20 years he taught and carried out research at the University of Washington. Along the way, he also founded several ventures based on artificial intelligence and Big Data, including startups that were sold to Microsoft and eBay.

 

Oren Etzioni. Photo: Amit Sha'al
Mr. Etzioni visits Israel regularly. Last month he attended a conference at the Hebrew University of Jerusalem where Calcalist caught up with him for an interview.

 

Q: How is the research institute connected to your entrepreneurship?

 

A: The Allen Center is a nonprofit, but I think of it as an entrepreneurial institution. We try to move fast, do new things, set clear targets and measure ourselves against them. We have a business incubator and there are companies that were founded as spin-offs of the center.

 

Q: How did the connection with Paul Allen happen?

 

A: In Seattle, he is huge: he has a football team, a soccer team, real estate, museums, a music festival. He funds research, including at the University of Washington. He decided to start a center for AI and reached out to candidates, including myself, and it was soon clear this was the opportunity of a lifetime. We saw eye to eye on what we wanted to do.

 

Q: And what is it that you wanted to do?

 

A: I had an academic career, but I wanted to have more impact, and he didn’t want to establish another department at a university. He wanted to make a big impact in the area of AI, especially in semantic technology, which refers to how people understand things. AI systems can make limited black-and-white distinctions. Understanding is more difficult. Allen asked me at first, “Is it possible to give an artificial intelligence a reference book to read and then ask it questions?” It sounds like a simple task, but the answer was no. We have been working on it, and there is progress, but it is still a difficult problem.

 

Q: What is the right way to think about AI anyway?

 

A: People use all kinds of metaphors: monsters like the Golem of Prague or the Terminator, or an invention like dynamite, which was developed for use in mines and then, to our regret, appropriated for its destructive power. On the other hand, one of my colleagues, Professor Andrew Ng, says that artificial intelligence is like the new electricity, both in the magnitude of the change and in the ease of use. You want a current? All you have to do is plug in, the power comes out, and life is great. In reality, it’s not so easy. The technology is complicated to operate. So my metaphor is that AI is simply like a software program: technology that can be used for good and for bad, and that is indeed bringing about deep changes. AI can be thought of as the next stage in the evolution of software and coding languages.

 

Q: You wrote in the past that we should be neither too optimistic nor too pessimistic about AI.

 

A: The problem with too much pessimism is clear: the world is facing major problems like inequality and climate change. It’s true that AI can make them worse, but it can also help solve them. Seeing AI as a disaster is great material for Hollywood, but it’s not reality. On the other hand, many of my colleagues are total optimists. They are also wrong. The question is how to use this powerful technology. In the end, these will be decisions for policymakers, politicians, and voters to make. So instead of extreme approaches, let’s look at AI in a rational and evidence-based manner.

 

Q: Okay, so without being overly optimistic or pessimistic: Where are we going in ten years?

 

A: The best way to think ten years ahead is to look ten years back. At the micro level, things have changed during this time: we have moved past the iPhone 3, for example. But on the macro scale, not much has changed. In ten years, we are still going to be building AI systems that are narrow, that can play Go, for example, and win. Maybe they will also recognize faces and diagnose certain diseases. AI will be able to carry out those tasks in a superhuman way. But wider capabilities, the ones we think of as intelligence, such as understanding a situation or context, will be much harder to achieve. In 1996, the computer system Deep Blue beat Garry Kasparov in a game of chess. It could play the best chess in the world while the room was on fire and not notice a thing. Today we have a program that can beat the world champion at Go, which is a much more complicated game, while the room is on fire.

 

Q: Meaning that AI still cannot tell what is happening around it.

 

A: Yes. There has been no progress in its ability to understand what is happening around it. I expect that ten years from now there will maybe be a program that beats the best Minecraft player in the world, but it still won’t notice that the room is on fire. That’s where it needs us. That is why we need to aim for intelligence that enhances human capabilities, that works in tandem with people.

 

Q: And yet, there is a difference. Deep Blue had to be programmed with the rules of chess. With Go, the program learned the game on its own in five hours.

 

A: It’s true. And there have been significant advances over these past twenty years. But now we hear that DeepMind (an AI company) has entered the medical industry, and we might be tempted to think, “Okay, in five hours we’ll solve the problems of medicine. We’ll solve cancer. Go is such a complicated game, this should be nothing for DeepMind.” But the answer is no. It’s not like that. There’s a paradox that people tend to miss: things that are difficult for people are easy for machines, and things that are difficult for machines are easy for people. The real world, real people, real speech, books—these are a lot harder than Go.

 

Q: You are working on AI that can use common sense, that understands natural language, that understands questions in science and can answer them. Do you have some concern that one day you’ll discover that the systems in your lab have developed AI that can think for itself?

 

A: It’s a little like the old stereo systems with all the buttons and equalizers that had to be adjusted manually in order to get the right sound. Now we have systems that do it automatically. They find the bass and the treble on their own. Fearing AI is a little like worrying that automation will turn the stereo system into the Death Star from Star Wars. It just doesn’t work that way. Whoever has written code knows how difficult it is to get code to do anything at all. And still, it’s a legitimate question. As we develop coding languages and more sophisticated software, programs do indeed examine more options, learning along the way. But I am dying for software that will show a little initiative, do something interesting. Instead, I see Murphy’s law of AI: every single thing a program can do wrong, it will do wrong.

 

Q: What areas are the most promising for AI?

 

A: My main focus is on using this technology to save lives. There are two areas where this is most concrete. Cars are one. Forty thousand Americans die each year in road accidents, and millions are injured. The estimates are that we can reduce those numbers by 80 percent with the help of cars based on AI. The other area is healthcare. Medicine is improving all the time. Life expectancy is going up, and yet the leading cause of death at U.S. hospitals—and I assume it's the same in Israel—is human error. Doctors get tired, confused, and they have a hard time keeping track of all the information. Sometimes they don’t have access to your full medical history. And so they kill you. Not on purpose, of course. We could prevent a lot of errors and provide better treatment, better diagnosis, better follow-up—all with the help of better AI. What makes me wake up in the morning and continue to research AI after 30 years is the question of how to use this technology to save lives.

 

Q: Meanwhile, AI is being criticized for how it has been abused. For example, there’s the way Russia used Facebook to influence the U.S. election. Is there any kind of self-reflection around that in the industry?

 

A: I am not familiar with everything that is happening, of course, and there are things we think about more now, such as the role of AI in reproducing discrimination or in invading privacy. But overall the reckoning is limited in scope. Companies like Google, Facebook, and Amazon practically print money. They use AI in a focused way to become more effective, and they are content with that. The criticism mostly comes from outside.

 

Q: So the responsibility actually belongs to the regulators? Do they need to intervene?

 

A: It’s not a simple question. The research advances quickly and regulations change slowly. But I think regulators have the job of overseeing specific applications of AI: cars and weapons, for example. Systems that can save or take lives.

 

Q: What about the demand that algorithms be more transparent, so that we know how they make decisions?

 

A: Most of the discussion is way off. Do we have transparency today? Politicians aren’t transparent, and with judges, we also don’t know how they make their decisions. There are lots of studies showing that, as people, we craft narratives that retroactively explain how we think we acted in the past. Let’s not ask AI to meet an impossible standard. The way people think about AI is similar to how some think about Israel. They judge harshly, unrealistically. Of course, Israel and AI should aim for higher standards. But those need to be realistic, and the relevant question is how much transparency you want from these systems compared to what you get from human beings. I think that forcing transparency through regulation leads to lip service to transparency, not real transparency.
