Peter Norvig, Google's director of research, and Eric Horvitz, a distinguished scientist at Microsoft Research, recently spoke jointly to an audience at the Computer History Museum in Mountain View, California, about the promise of AI. Afterward, the pair talked with Technology Review's IT editor, Tom Simonite, about what AI can do today and what they think it will be capable of tomorrow. Artificial intelligence is a complex subject, and some answers have been edited for brevity.
Technology Review: You both spoke on stage of how AI has been advanced in recent years through the use of machine-learning techniques that take in large volumes of data and figure out things like how to translate text or transcribe speech. What about the areas we want AI to help where there isn't lots of data to learn from?
Peter Norvig: What we're doing is like looking under the lamppost for your dropped keys because the light is there. We did really well with text and speech because there's lots of data in the wild. Parsing
Eric Horvitz: I've often thought that if you had a cloud service in the sky that recorded every speech request and what happened next (every conversation in every taxi in Beijing, for example), it could be possible to have AI learn how to do everything.
More seriously, if we can find ways to capture lots of data in a way that preserves privacy, we could make that possible.
Isn't it difficult to use machine learning if the training data isn't already labeled and explained, to give the AI a "truth" to get started from?
Horvitz: A lack of labels is a challenge. One solution is to actually pay people a small amount to help out a system with data it can't understand, by doing microtasks like labeling images or other small things. I think using human computation to augment AI is a really rich area.
Another possibility is to build systems that understand the value of information, meaning they can automatically compute what the next best question to ask is, or how to get the most value out of an additional tag or piece of information provided by a human.
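One common, concrete proxy for "the next best question to ask" is uncertainty sampling from active learning; the sketch below uses it to pick which item a human should label next. This is an illustration under that assumption, not necessarily the specific value-of-information method Horvitz has in mind, and the predicted probabilities are hypothetical.

```python
from math import log2

def entropy(p):
    """Binary entropy of a predicted probability p (0 < p < 1):
    highest when the model is least sure."""
    return -(p * log2(p) + (1 - p) * log2(1 - p))

def next_item_to_label(predictions):
    """Return the index of the unlabeled item the model is most
    uncertain about -- the most valuable one to ask a human about."""
    return max(range(len(predictions)), key=lambda i: entropy(predictions[i]))

# Hypothetical predicted probabilities for four unlabeled images:
preds = [0.95, 0.51, 0.80, 0.10]
print(next_item_to_label(preds))  # the 0.51 item carries the most information
```

A fuller value-of-information calculation would also weigh the cost of asking against the expected improvement in the model's decisions, but the greedy uncertainty rule captures the core idea of automatically choosing what to ask next.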
Norvig: You don't have to tell a learning system everything. There's a type of learning called reinforcement learning where you just give a reward or punishment at the end of a task. For example, you lose a game of checkers; you aren't told where you went wrong, and you have to learn what to do to get the reward next time.
(Source: www.technologyreview.com)