
Google and Microsoft talk Artificial Intelligence

CIOL Bureau

CALIFORNIA, USA: Google and Microsoft don't share a stage often, being increasingly fierce competitors in areas such as Web search, mobile, and cloud computing. But the rivals can agree on some things, like the importance of artificial intelligence to the future of technology.


Peter Norvig, Google's director of research, and Eric Horvitz, a distinguished scientist at Microsoft Research, recently spoke jointly to an audience at the Computer History Museum in Mountain View, California, about the promise of AI. Afterward, the pair talked with Technology Review's IT editor, Tom Simonite, about what AI can do today, and what they think it will be capable of tomorrow. Artificial intelligence is a complex subject, and some answers have been edited for brevity.

Technology Review: You both spoke on stage of how AI has been advanced in recent years through the use of machine-learning techniques that take in large volumes of data and figure out things like how to translate text or transcribe speech. What about the areas where we want AI to help but there isn't a lot of data to learn from?

Peter Norvig: What we're doing is like looking under the lamppost for your dropped keys, because that's where the light is. We did really well with text and speech because there's lots of data in the wild. Parsing data almost never occurs naturally, except perhaps in someone's linguistics homework, so we have to learn to do that without data. One of my colleagues is trying to get around that by looking at which parts of online text have been made into links; that can signal where a particular part of a sentence is.


Eric Horvitz: I've often thought that if you had a cloud service in the sky that recorded every speech request and what happened next–every conversation in every taxi in Beijing, for example–it could be possible to have AI learn how to do everything.

More seriously, if we can find ways to capture lots of data in a way that preserves privacy, we could make that possible.

Isn't it difficult to use machine learning if the training data isn't already labeled and explained, to give the AI a "truth" to get started from?


Horvitz: You don't need it to be completely labeled. An area known as semi-supervised learning is showing us that even if 1 percent or less of the data is tagged, you can use that to understand the rest.
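The semi-supervised idea Horvitz describes can be sketched in a few lines. The following toy self-training loop is our own illustration, not a description of any Google or Microsoft system: starting from a handful of labeled points, it repeatedly adopts its single most confident prediction on the unlabeled data as if it were a true label.

```python
# Toy self-training sketch: propagate labels from a tiny seed set
# to unlabeled points using a nearest-labeled-neighbor rule.

def self_train(labeled, unlabeled, rounds=5):
    """labeled: list of (x, y) pairs; unlabeled: list of x values."""
    labeled = list(labeled)
    pool = list(unlabeled)
    for _ in range(rounds):
        if not pool:
            break
        # Predict each unlabeled point from its nearest labeled neighbor.
        scored = []
        for x in pool:
            nearest = min(labeled, key=lambda p: abs(p[0] - x))
            scored.append((abs(nearest[0] - x), x, nearest[1]))
        # Adopt only the most confident (closest) prediction per round.
        scored.sort()
        dist, x, y = scored[0]
        labeled.append((x, y))
        pool.remove(x)
    return labeled

# Two labeled points stand in for the "1 percent or less" of tagged data:
seed = [(0.0, "a"), (10.0, "b")]
result = dict(self_train(seed, [1.0, 2.0, 8.0, 9.0], rounds=4))
# Points near 0 inherit "a"; points near 10 inherit "b".
```

Real semi-supervised methods use far more careful confidence estimates, but the pattern is the same: a small labeled seed is leveraged to understand the rest of the data.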

But a lack of labels is a challenge. One solution is to actually pay people a small amount to help out a system with data it can't understand, by doing microtasks like labeling images or other small things. I think using human computation to augment AI is a really rich area.

Another possibility is to build systems that understand the value of information, meaning they can automatically compute what the next best question to ask is, or how to get the most value out of an additional tag or piece of information provided by a human.
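One common formalization of "the next best question" is uncertainty sampling: ask a human about the item the model is least sure of, since that label carries the most information. The sketch below is a hedged illustration of that idea, not Microsoft's actual system.

```python
# Uncertainty sampling: pick the item whose predicted probability is
# closest to a coin flip, i.e. where one more human label helps most.

def next_best_question(probabilities):
    """probabilities: dict mapping item -> model's P(label is positive)."""
    return min(probabilities, key=lambda item: abs(probabilities[item] - 0.5))

# The model is confident about img1 and img3, so img2 is worth asking about:
items = {"img1": 0.95, "img2": 0.52, "img3": 0.10}
chosen = next_best_question(items)  # "img2"
```

Full value-of-information computations also weigh the cost of asking against the expected improvement, but the core loop is this: quantify what each possible question would buy, and ask the best one.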


Norvig: You don't have to tell a learning system everything. There's a type of learning called reinforcement learning where you just give a reward or punishment at the end of a task. For example, if you lose a game of checkers, you aren't told where you went wrong; you have to learn what to do to get the reward next time.
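A minimal tabular Q-learning run shows the mechanism Norvig describes. This is our own illustration with an invented toy environment, not the checkers example itself: the agent walks along a line of states, sees a reward only at the final state, and must propagate that end-of-task signal back to the earlier moves that led to it.

```python
import random

def q_learn(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    # States 0..n_states-1 lie on a line; actions: 0 = left, 1 = right.
    # The only reward is 1 for reaching the last state ("winning").
    q = {(s, a): 0.0 for s in range(n_states) for a in (0, 1)}
    rng = random.Random(0)
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action choice; break ties randomly.
            if rng.random() < eps or q[(s, 0)] == q[(s, 1)]:
                a = rng.choice((0, 1))
            else:
                a = 0 if q[(s, 0)] > q[(s, 1)] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: reward plus discounted best future value.
            best_next = max(q[(s2, 0)], q[(s2, 1)])
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = q_learn()
# After training, "right" has a higher value than "left" in every state,
# even though no intermediate move was ever rewarded directly.
```

The agent is never told which move was good; the discounted update spreads the single end-of-episode reward backward, exactly the "reward at the end of a task" setup described above.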


(Source: www.technologyreview.com)
