This year, the Association for Computing Machinery (ACM) celebrates 50 years of the ACM Turing Award, the most prestigious technical award in the computing industry. The Turing Award, generally regarded as the ‘Nobel Prize of computing’, is an annual prize awarded to “an individual selected for contributions of a technical nature made to the computing community”. In celebration of the 50-year milestone, renowned computer scientist Melanie Mitchell spoke to CBR’s Ellie Burns about artificial intelligence (AI) – the biggest breakthroughs, hurdles and myths surrounding the technology.
EB: What are the most important examples of Artificial Intelligence in mainstream society today?
MM: There are many important examples of AI in the mainstream; some very visible, others blended in so well with other methods that the AI part is nearly invisible. Web search is an “invisible” example that has had perhaps the broadest impact. Today’s web search algorithms, which power Google and other modern search engines, are imbued with AI methods such as text processing with neural networks, and searching large-scale knowledge representation graphs. But web search happens so quickly and seamlessly that most people are unaware of how much “AI” has gone into it.
Another example with large impact is speech recognition. With the recent ascent of deep neural networks, speech recognition has improved enough so that it can be easily used for transcribing speech, texting, video captioning, and many other applications. It’s not perfect, but in many cases it works really well.
There are many other natural language AI applications that ordinary people use every day: email spam detection, language translation, automated news article generation, and automated grammar and writing critiques, among others.
Computer vision is also making an impact in day-to-day life, especially in the areas of face recognition (e.g., on Facebook or Google Photos), handwriting recognition, and image search (i.e., searching a database for a given image, or for images similar to an input image).
We’re all familiar with so-called “recommendation systems,” which advise us on which books, movies, or news stories we might like, based on what kinds of things we’ve already looked at, and on what other people “like us” have enjoyed.
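The “people like us” idea Mitchell mentions is the core of user-based collaborative filtering. The sketch below illustrates it with a toy, invented ratings table (the user names, titles, and scores are made up for illustration; real recommendation systems are far more sophisticated): it finds users whose past ratings resemble yours, then scores the items you haven’t seen by their similarity-weighted ratings.

```python
# A minimal sketch of user-based collaborative filtering.
# The ratings data is invented purely for illustration.
ratings = {
    "alice": {"Dune": 5, "Heat": 1, "Amelie": 4},
    "bob":   {"Dune": 4, "Heat": 2, "Solaris": 5},
    "carol": {"Heat": 5, "Amelie": 1},
}

def similarity(u, v):
    """Cosine similarity over the items two users have both rated."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    dot = sum(ratings[u][i] * ratings[v][i] for i in common)
    norm_u = sum(ratings[u][i] ** 2 for i in common) ** 0.5
    norm_v = sum(ratings[v][i] ** 2 for i in common) ** 0.5
    return dot / (norm_u * norm_v)

def recommend(user):
    """Score unseen items by similarity-weighted ratings from other users."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        sim = similarity(user, other)
        for item, rating in ratings[other].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # ['Solaris'] -- bob rates most similarly to alice
```

Because bob’s tastes align closely with alice’s (both liked Dune, neither liked Heat), his high rating for Solaris carries the most weight in alice’s recommendations.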
Another sophisticated, but often invisible, application of AI is to navigation and route planning—for example, when Google Maps tells us very quickly the best route to take to a given destination. This is not at all a trivial problem, but, like web search, is available so easily and seamlessly that many people are unaware of the AI that has gone into it.
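At the heart of route planning is shortest-path search; Dijkstra’s algorithm is the textbook starting point, sketched below on a made-up miniature road network (production systems like Google Maps layer many further techniques on top of this). The place names and travel times here are invented for illustration.

```python
import heapq

# A made-up road network: edges carry travel times in minutes.
graph = {
    "home":     [("junction", 5), ("bridge", 9)],
    "junction": [("bridge", 2), ("office", 8)],
    "bridge":   [("office", 3)],
    "office":   [],
}

def shortest_time(start, goal):
    """Dijkstra's algorithm: minimum total travel time from start to goal."""
    dist = {start: 0}
    heap = [(0, start)]  # priority queue of (time-so-far, node)
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for neighbor, weight in graph[node]:
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return float("inf")

print(shortest_time("home", "office"))  # 10: home -> junction -> bridge -> office
```

Note that the greedy-looking direct edges (home to bridge in 9 minutes) lose to the three-hop route totalling 10 minutes via the junction and bridge; the algorithm guarantees the optimum by always expanding the closest unsettled node first.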
There are many more examples of AI impacting our daily lives, in medicine, finance, robotics, and other fields. I’ll mention one more possibly “invisible” area: targeted advertising. Companies are using massive amounts of data and advanced machine learning methods to figure out what ads to show you, and when, and where, and how. This one application of AI has become a huge economic force, and indeed has employed a lot of very smart AI Ph.D.s. As one well-known young data scientist lamented, “The best minds of my generation are thinking about how to make people click ads.”
EB: What have been the biggest breakthroughs in Artificial Intelligence in recent years and what impact is it having in the real-world?
MM: The methods known as “Deep Learning” or “Deep Networks” have been central to many of the applications I mentioned above. The breakthrough was not in inventing these methods—they’ve been around for decades. The breakthroughs rather were in getting them to work well, by using huge datasets for learning. This was possible mainly due to faster computers and new parallel computing techniques. But it’s been surprising (at least to me) how far AI can get with this “big data” approach.
The impact in the real world is both in the applications (such as speech recognition, face recognition, language translation, etc.) and also in the ascent of “data science” as a vital area in industry. Businesses have been doing what is called “data analytics” for a very long time, but now are taking this to a wholly new scale, and creating many new kinds of jobs for people who have skills in statistics and machine learning.
Another recent breakthrough is in the area of “reinforcement learning,” in which machines learn to perform a task by attempting to perform it and receiving positive or negative rewards. This is a kind of “active learning”—over time the machine performs various actions, occasionally gets “rewards” or “punishments”, and gradually figures out which chains of actions are likely to lead to rewards.
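The trial-and-reward loop described above can be sketched with Q-learning, one classic reinforcement learning algorithm (chosen here for illustration; the environment is an invented five-state corridor where the agent earns a reward only at the far end). The agent keeps a table of estimated long-term reward for each state-action pair and updates it after every step.

```python
import random

# Toy "corridor" environment: states 0..4; the agent starts at state 0
# and earns a reward of +1 only upon reaching state 4.
# Actions: 0 = move left, 1 = move right.
N_STATES, GOAL = 5, 4

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Q-table: estimated long-term reward for each (state, action) pair.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

random.seed(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        next_state, reward, done = step(state, action)
        # Core update: nudge Q toward reward plus discounted future value.
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

# After training, the greedy policy moves right in every state.
policy = ["left" if q[0] > q[1] else "right" for q in Q[:GOAL]]
print(policy)  # ['right', 'right', 'right', 'right']
```

The reward arrives only at the end of a chain of actions, yet the update rule gradually propagates its value backward through the table, which is exactly the “figuring out which chains of actions lead to rewards” described above.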
Like deep networks, reinforcement learning has been studied in the AI community since the 1960s, but recently it has been shown to work on some really impressive tasks, most notably Google’s AlphaGo system, which learned to play the game of Go—from scratch—and got to the point where it could beat some of the best human Go players. A number of clever new methods contributed to the effectiveness of reinforcement learning here; notably, one of them was to use deep networks to learn to evaluate possible actions to take.
Reinforcement learning methods are quite general—algorithms similar to those developed in the AlphaGo system have recently been used to significantly reduce energy use in Google’s data centers. I think we will be seeing some really interesting additional applications of reinforcement learning in the next few years.