October 28, 2016 (updated 7 November 2016)

From the Turing Test to Deep Learning: Artificial Intelligence Goes Mainstream

From AI ethics to issues of trust and bias, Melanie Mitchell talks to CBR about the future of AI.

By Ellie Burns

This year, the Association for Computing Machinery (ACM) celebrates 50 years of the ACM Turing Award, the most prestigious technical award in the computing industry. The Turing Award, generally regarded as the ‘Nobel Prize of computing’, is an annual prize awarded to “an individual selected for contributions of a technical nature made to the computing community”. In celebration of the 50-year milestone, renowned computer scientist Melanie Mitchell spoke to CBR’s Ellie Burns about artificial intelligence (AI) – the biggest breakthroughs, hurdles and myths surrounding the technology.

 

EB: What are the most important examples of Artificial Intelligence in mainstream society today?


Melanie Mitchell is a Professor of computer science at Portland State University. An active member of ACM, Mitchell is the author of five books and more than 80 scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her most recent book, Complexity: A Guided Tour (Oxford, 2009), won the 2010 Phi Beta Kappa Science Book Award and was named by Amazon.com as one of the 10 best science books of 2009.

MM: There are many important examples of AI in the mainstream; some very visible, others blended in so well with other methods that the AI part is nearly invisible. Web search is an “invisible” example that has had perhaps the broadest impact. Today’s web search algorithms, which power Google and other modern search engines, are imbued with AI methods such as text processing with neural networks, and searching large-scale knowledge representation graphs. But web search happens so quickly and seamlessly that most people are unaware of how much “AI” has gone into it.
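
To make the flavour of this concrete, here is a toy term-weighting ranker in Python. TF-IDF is a decades-old ingredient of text search, not what a modern engine actually runs; the documents and query below are invented purely for illustration.

```python
# Toy TF-IDF ranking: a classic term-weighting ingredient of text search.
# Modern engines layer neural text models and knowledge graphs on top of
# ideas like this. Documents and query are invented for illustration.
import math
from collections import Counter

docs = {
    "d1": "neural networks for text processing",
    "d2": "searching large scale knowledge graphs",
    "d3": "cats and dogs",
}

def rank(query, docs):
    n = len(docs)
    tokens = {d: text.split() for d, text in docs.items()}
    # document frequency: how many documents contain each term
    df = Counter(t for toks in tokens.values() for t in set(toks))
    scores = {}
    for d, toks in tokens.items():
        tf = Counter(toks)
        scores[d] = sum(tf[t] * math.log(n / df[t])
                        for t in query.split() if t in df)
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(rank("neural text processing", docs))  # d1 should rank first
```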

Another example with large impact is speech recognition. With the recent ascent of deep neural networks, speech recognition has improved enough so that it can be easily used for transcribing speech, texting, video captioning, and many other applications.  It’s not perfect, but in many cases it works really well.

There are many other natural language AI applications that ordinary people use every day: email spam detection, language translation, automated news article generation, and automated grammar and writing critiques, among others.

Computer vision is also making an impact in day-to-day life, especially in the areas of face recognition (e.g., on Facebook or Google Photos), handwriting recognition, and image search (i.e., searching a database for a given image, or for images similar to an input image).


We’re all familiar with so-called “recommendation systems,” which advise us on which books, movies, or news stories we might like, based on what kinds of things we’ve already looked at, and on what other people “like us” have enjoyed.
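
As a rough sketch of that idea, the following Python snippet implements a bare-bones user-based collaborative filter over a handful of made-up ratings. Real recommendation systems use far richer models, but the intuition is the same: score unseen items by what similar users enjoyed.

```python
# Bare-bones user-based collaborative filtering: score items a user has
# not seen by what similar users rated highly. All ratings are made up;
# real recommenders use far richer models, but the intuition is the same.
import math

ratings = {
    "alice": {"book_a": 5, "book_b": 3, "book_c": 4},
    "bob":   {"book_a": 5, "book_b": 2, "book_d": 5},
    "carol": {"book_b": 5, "book_c": 1},
}

def cosine(u, v):
    shared = set(u) & set(v)
    dot = sum(u[i] * v[i] for i in shared)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if shared else 0.0

def recommend(user):
    mine = ratings[user]
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine(mine, theirs)   # how alike are our tastes?
        for item, r in theirs.items():
            if item not in mine:     # only suggest unseen items
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(recommend("alice"))  # suggests book_d, carried by Bob's similarity
```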

Another sophisticated, but often invisible, application of AI is to navigation and route planning—for example, when Google Maps tells us very quickly the best route to take to a given destination. This is not at all a trivial problem, but, like web search, is available so easily and seamlessly that many people are unaware of the AI that has gone into it.
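
Under the hood, route planning starts from classical shortest-path algorithms. Below is a minimal Python sketch of Dijkstra's algorithm on a made-up road graph; production systems such as Google Maps add live traffic data, road hierarchies, and heavy precomputation on top of ideas like this.

```python
# Dijkstra's shortest-path algorithm on a tiny, invented road graph:
# one classical piece of the route-planning puzzle.
import heapq

graph = {  # node -> [(neighbour, minutes)], made-up travel times
    "home":   [("bridge", 10), ("tunnel", 15)],
    "bridge": [("office", 12)],
    "tunnel": [("office", 5)],
    "office": [],
}

def shortest_time(graph, start, goal):
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, already found a better path
        for nbr, w in graph[node]:
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(queue, (nd, nbr))
    return None

print(shortest_time(graph, "home", "office"))  # 20 (via the tunnel)
```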

There are many more examples of AI impacting our daily lives, in medicine, finance, robotics, and other fields. I’ll mention one more possibly “invisible” area: targeted advertising. Companies are using massive amounts of data and advanced machine learning methods to figure out what ads to show you, and when, and where, and how. This one application of AI has become a huge economic force, and indeed has employed a lot of very smart AI Ph.D.s. As one well-known young data scientist lamented, “The best minds of my generation are thinking about how to make people click ads.”

 

EB: What have been the biggest breakthroughs in Artificial Intelligence in recent years and what impact is it having in the real-world?

MM: The methods known as “Deep Learning” or “Deep Networks” have been central to many of the applications I mentioned above. The breakthrough was not in inventing these methods—they’ve been around for decades. Rather, the breakthroughs were in getting them to work well, by using huge datasets for learning. This was possible mainly due to faster computers and new parallel computing techniques. But it’s been surprising (at least to me) how far AI can get with this “big data” approach.
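
To see how little of the core math is new, here is a tiny two-layer network trained by gradient descent, written in plain Python with NumPy. The XOR task and every parameter choice are illustrative; the modern breakthrough lay in scaling exactly this kind of computation to enormous datasets on parallel hardware.

```python
# A tiny two-layer neural network learning XOR by gradient descent, in
# plain NumPy. The math here is decades old; what changed recently is
# running this kind of computation at enormous scale in parallel.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.1
for step in range(5000):
    h = np.tanh(X @ W1 + b1)            # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = out - y                     # gradient (cross-entropy loss)
    d_h = (d_out @ W2.T) * (1 - h**2)   # backpropagate through tanh
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```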

The impact in the real world is both in the applications (such as speech recognition, face recognition, language translation, etc.) and also in the ascent of “data science” as a vital area in industry. Businesses have been doing what is called “data analytics” for a very long time, but now are taking this to a wholly new scale, and creating many new kinds of jobs for people who have skills in statistics and machine learning.

Another recent breakthrough is in the area of “reinforcement learning,” in which machines learn to perform a task by attempting to perform it and receiving positive or negative rewards. This is a kind of “active learning”—over time the machine performs various actions, occasionally gets “rewards” or “punishments”, and gradually figures out which chains of actions are likely to lead to rewards.
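
A minimal sketch of that reward-driven loop is tabular Q-learning, shown below in Python on an invented five-cell corridor where only the final cell pays a reward; the agent has to discover that a chain of “right” moves leads to the payoff.

```python
# Tabular Q-learning on an invented five-cell corridor. The agent starts
# in cell 0 and is rewarded only on reaching cell 4, so it must learn
# that a chain of "right" moves pays off. A bare-bones sketch of the
# trial-and-error loop described above.
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                     # move left, move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def greedy(s):
    # pick the highest-valued action, breaking ties at random
    return max(ACTIONS, key=lambda a: (Q[(s, a)], random.random()))

for episode in range(200):
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s2 = min(max(s + a, 0), N_STATES - 1)   # walls clamp the move
        r = 1.0 if s2 == GOAL else 0.0
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

print({s: greedy(s) for s in range(GOAL)})  # should be +1 (right) everywhere
```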

Like deep networks, reinforcement learning has been studied in the AI community since the 1960s, but recently it has been shown to work on some really impressive tasks, most notably Google’s AlphaGo system, which learned to play the game of Go—from scratch—and got to the point where it could beat some of the best human Go players. A number of clever new methods contributed to this effectiveness; in fact, one of them was to use deep networks to learn to evaluate possible actions to take.

Reinforcement learning methods are quite general—algorithms similar to those developed in the AlphaGo system have recently been used to significantly reduce energy use in Google’s data centers. I think we will be seeing some really interesting additional applications of reinforcement learning in the next few years.

 

EB: What are some of the major hurdles that Artificial Intelligence still needs to overcome in the next ten years?

MM: The biggest hurdles for AI are to deal with (1) abstract concepts; (2) common sense; and (3) learning without being explicitly “taught”.   I personally don’t think 10 years will be enough to get anywhere near “human-level” in these areas.

As for abstract concepts: We have AI that can recognize pictures of cats, but they don’t really know anything about cats, and their relationships to other concepts.   Think about the concept of a “cat fight”. Of course understanding the literal meaning of this requires a lot of knowledge about cats and their behavior and motivations. But humans are able to take this concept, abstract it, and apply it in new domains. We can recognize a “cat fight” between people, but also between companies, or nations, or television shows, or university departments. This is just one example of how a concept can acquire new meaning via abstraction and analogy. Learning and using concepts in this way is a hallmark of human cognition, and it is a hurdle that AI will need to overcome in order to reach that level of intelligence. This is closely related to an AI problem called “Transfer Learning”:  if a machine learns something in one domain, how can it transfer what it has learned to a related domain? To my mind, this is essentially the question of how to get computers to perform abstraction and analogy-making.
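
In the narrow machine-learning sense (as opposed to human-style analogy-making), transfer learning is often approximated today by reusing a learned representation and retraining only a small “head” on the new domain. The Python sketch below shows the mechanics, with entirely made-up data and a stand-in for a pretrained feature extractor.

```python
# Schematic transfer learning: reuse a representation "learned" on a big
# source task and retrain only a small linear head on the target domain.
# Everything here is a toy with made-up data; W_frozen stands in for a
# pretrained feature extractor.
import numpy as np

rng = np.random.default_rng(1)

W_frozen = rng.normal(size=(10, 4))   # pretend this came from pretraining

def features(x):
    return np.tanh(x @ W_frozen)      # frozen: never updated again

# Small target-domain dataset: too little data to learn from scratch.
X = rng.normal(size=(20, 10))
y = (X @ W_frozen[:, 0] > 0).astype(float)

w_head = np.zeros(4)                  # the only weights we train
for _ in range(500):
    p = 1 / (1 + np.exp(-features(X) @ w_head))
    w_head -= 0.5 * features(X).T @ (p - y) / len(y)  # logistic step

acc = ((features(X) @ w_head > 0) == (y == 1)).mean()
print(f"target-task accuracy with a frozen representation: {acc:.2f}")
```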

Now, onto common sense:  IBM’s Watson program, which famously beat expert humans on the game show Jeopardy, “knew” that Michael Phelps had won a particular swimming race by 1/100 of a second, but does it know whether or not he got wet in doing so?  Does it know if he got out of the pool after the race?  Does it know if he took off his socks before getting into the pool? There is so much “hidden knowledge” in human understanding that is lacking in computer “understanding.” Some Artificial Intelligence researchers have tried to solve this by creating enormous databases of “common sense knowledge,” but as yet these haven’t succeeded in producing machines with the kinds of background knowledge of the world that humans possess. Imbuing what we call “common sense” into computers is still a wide-open problem.

Finally, the most successful AI applications to date involve machine learning in which the machine learns from “labeled” examples, such as photos of cats that are labeled with the word “cat”. These applications often require millions of labeled examples to learn successfully. But of course humans can learn concepts with many fewer “labeled” examples, and often with no labeled examples. The area of AI called “unsupervised learning” addresses this, but to date it has not had the kinds of big successes seen with “supervised learning”. It’s a problem AI is going to have to solve to get anywhere near human-level performance on many tasks.
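
The contrast is easy to see in miniature. In the Python sketch below, on made-up one-dimensional data, the nearest-neighbour classifier needs a label for every training point, while 2-means clustering recovers the two groups from the raw numbers alone.

```python
# Supervised vs. unsupervised learning in miniature, on made-up 1-D data:
# the classifier needs a label for every training point, while k-means
# finds the two groups from the raw numbers alone.
import random

data = [0.9, 1.1, 1.0, 5.0, 5.2, 4.8]                # two obvious clusters
labels = ["cat", "cat", "cat", "dog", "dog", "dog"]  # the supervision

def classify(x):
    # supervised: 1-nearest-neighbour, impossible without the labels
    return min(zip(data, labels), key=lambda dl: abs(dl[0] - x))[1]

def kmeans2(points, steps=10):
    # unsupervised: 2-means clustering, uses no labels at all
    c = random.sample(points, 2)
    for _ in range(steps):
        g0 = [p for p in points if abs(p - c[0]) <= abs(p - c[1])]
        g1 = [p for p in points if abs(p - c[0]) > abs(p - c[1])]
        if g0 and g1:
            c = [sum(g0) / len(g0), sum(g1) / len(g1)]
    return sorted(c)

print(classify(1.2))  # 'cat', thanks to the given labels
print(kmeans2(data))  # centroids near 1.0 and 5.0, discovered label-free
```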

 

EB: How well prepared is the Artificial Intelligence community and society to deal with the deep ethical issues that come with using AI approaches in life-critical areas such as health and transportation?

MM: Right now, the community is not very well prepared. But these issues are getting a lot of attention, and both federal and state governments are making efforts to craft policy around them. These issues are very complex, but let me comment briefly on a few interesting aspects. First, AI systems are relying increasingly on machine learning, in which computers learn from large amounts of data how to make decisions. Often, the decisions are based on complicated statistical correlations learned by the computer that are hard for humans to understand. This brings in the issue of trust: How do we know the machines are making decisions based on features we know to be important? How do we know that the statistical correlations underlying the decisions are not based on accidental artifacts present in the data? There is now quite a bit of buzz in the AI community around the issue of “transparency” or “interpretability” of AI systems. The European Union has even created a “Right to Explanation,” in which a person is entitled to “meaningful information about the logic involved” in a computer’s decision affecting that person. This is still controversial and somewhat ill-defined, but I expect to hear a lot more about this kind of regulation in the near future.

Another issue is that of bias:  For a machine learning from data, if there is bias in the data, there will likely be bias in the decisions made by the machine. One hypothetical example: Suppose a cancer diagnosis system learns from a dataset in which the patients are predominantly male, but the physicians using the system aren’t aware of this possible bias? The system’s decisions might be correct for men but way off for women.   This is a simple hypothetical, but researchers have already detected implicit biases in some of the datasets used to train AI systems used in the real world.
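
The mechanism is easy to demonstrate with synthetic numbers. The Python sketch below fits a single diagnostic threshold to data that is 90% “group A”, then reports accuracy separately per group; the groups, distributions, and threshold rule are all invented for illustration, not a real medical model.

```python
# A synthetic illustration of dataset bias, not a real medical model: a
# single threshold is fitted to pooled data that is 90% "group A", whose
# score distribution differs from group B's, and accuracy is then
# reported per group. All numbers are invented.
import numpy as np

rng = np.random.default_rng(42)

def make_group(n, healthy_mean, sick_mean):
    x = np.concatenate([rng.normal(healthy_mean, 1, n),
                        rng.normal(sick_mean, 1, n)])
    y = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = has the disease
    return x, y

xa, ya = make_group(450, healthy_mean=0.0, sick_mean=3.0)  # group A: 90%
xb, yb = make_group(50,  healthy_mean=2.0, sick_mean=5.0)  # group B: 10%

# Fit one diagnostic threshold on the pooled (mostly group A) data.
x, y = np.concatenate([xa, xb]), np.concatenate([ya, yb])
best = max(np.linspace(x.min(), x.max(), 200),
           key=lambda t: ((x > t) == y).mean())

for name, gx, gy in [("A", xa, ya), ("B", xb, yb)]:
    print(f"group {name} accuracy: {((gx > best) == gy).mean():.2f}")
# group A looks fine; group B, under-represented in training, fares worse
```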

There’s a lot of discussion around the topic of how “autonomous” machines should be allowed to be in making decisions. This is currently a huge issue for self-driving cars, and will remain a central issue as AI gets ever more sophisticated and widely used.     I expect “AI ethics” to become a major new sub-discipline of philosophy.

 

EB: Much has been made of the potential for Artificial Intelligence in pop culture. What are some of the biggest myths you’ve seen? Can you think of examples where science fiction is getting close to reality?

MM: One of the big myths is that “computers have passed the Turing Test.”  In fact, in all the publicized “Turing Tests,” in which judges have tried to guess which conversation is with a human and which with a computer, the conversation topics have been so restricted that the test is nothing like what Turing originally envisioned.

To be honest, I can’t think of examples where science fiction is getting close to reality.  But I’m not really a science fiction fan, so I can’t comment much on this.


Statue of Alan Turing at the Bletchley Park Museum, poring over an Enigma machine.
