MIT Algorithm Roots Out Adversarial Examples In Computer Vision Models

“Given one input image, we want to know if we can modify it in a way that it triggers an incorrect classification.”

By CBR Staff Writer

Massachusetts Institute of Technology (MIT) researchers have designed a scalable algorithm to help evaluate the robustness of neural networks used in computer vision software.

Convolutional neural networks (CNNs) are used to classify images in devices running some form of computer vision software. Driverless cars use CNNs to evaluate their visual input and determine an appropriate response, such as identifying a stop sign and then stopping in the correct place.

However, while the human eye can make out a stop sign that is dirty or has a few bullet holes in it, a CNN’s ability to identify a sign correctly can be drastically affected by something as simple as a section of darker pixels within the image.

When training CNNs, researchers often test the network by introducing what are known as ‘adversarial examples’ into the images. These can range from slight changes, such as a black sticker added to a sign post, to more drastic ones, like the overlay of a second, contrasting object.

Small-scale tests can be run to check how robustly a CNN identifies an image even when an adversarial example is presented, but scalability has always been an issue: current techniques do not extend easily to more complex neural networks.

Researchers at MIT have developed a new testing technique that uses 60,000 training images and 10,000 test images and introduces small perturbations such as changes to pixel brightness and contrast. The system keeps adjusting the images fed into a CNN to find the point at which the network makes a false classification.
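
The exact mechanics of the MIT algorithm are not spelled out in this article, but the basic idea of nudging a single factor such as brightness until the classification flips can be illustrated with a minimal, hypothetical Python sketch. The toy mean-intensity “classifier”, the brightness range and the test image below are stand-ins invented for illustration, not part of the MIT work:

    import numpy as np

    def toy_classifier(image):
        """A stand-in 'network': label 1 if the mean pixel intensity exceeds 0.5."""
        return 1 if image.mean() > 0.5 else 0

    def find_brightness_adversary(image, true_label, max_shift=0.1, steps=100):
        """Sweep brightness offsets in [-max_shift, +max_shift] and return the first
        shift that flips the classifier's decision, or None if no shift does."""
        for shift in np.linspace(-max_shift, max_shift, steps):
            perturbed = np.clip(image + shift, 0.0, 1.0)
            if toy_classifier(perturbed) != true_label:
                return shift, perturbed
        return None  # no misclassifying brightness shift found in this sweep

    # Example: a mid-grey 28x28 image, initially classified as label 0.
    image = np.full((28, 28), 0.45)
    result = find_brightness_adversary(image, true_label=toy_classifier(image))
    if result:
        print("adversarial brightness shift found:", round(result[0], 4))
    else:
        print("no misclassifying shift found in the sweep")

A finite sweep like this can only ever find adversarial examples; it cannot prove their absence. The MIT approach, by contrast, reasons about the whole space of allowed perturbations, which is what lets it return a guarantee when no adversarial example exists.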

Adversarial Examples In Computer Vision Software

The paper’s first author, Vincent Tjeng, a graduate student in the Computer Science and Artificial Intelligence Laboratory, commented in an MIT post: “Adversarial examples fool a neural network into making mistakes that a human wouldn’t. For a given input, we want to determine whether it is possible to introduce small perturbations that would cause a neural network to produce a drastically different output than it usually would.”

“In that way, we can evaluate how robust different neural networks are, finding at least one adversarial example similar to the input or guaranteeing that none exist for that input.”
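
Tjeng’s description matches a standard way of stating the robustness-verification problem. As a hedged formalisation (the bounded-perturbation constraint below is one common choice of “allowable modifications”, not necessarily the exact constraint set used in the paper): given a network f and an input image x, decide whether

    \exists\, x' \quad \text{such that} \quad \|x' - x\|_\infty \le \epsilon
    \quad \text{and} \quad \arg\max_j f_j(x') \ne \arg\max_j f_j(x)

If no such x' exists, the network is certified robust at x for that perturbation budget; if one does exist, it is precisely the adversarial example Tjeng describes.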

Using their testing technique, the researchers can identify the limits to which a particular CNN can be pushed. If, for instance, MIT’s algorithm succeeds in finding an adversarial example, it highlights that certain images are at risk of being distorted in the CNN’s eyes, and that more work is needed to mitigate the risk of misidentifying that particular image, or, in the case of a car, the sign it has just processed.

However, if the classifying section of the CNN identifies the image correctly even after MIT’s algorithm has tweaked it and introduced new pixel brightness levels, then the researchers can say that no adversarial examples of that kind exist for that image.
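
For very simple models, it is easy to see how such a guarantee can cover a whole range of perturbations. As a heavily simplified sketch (the toy classifier and the ±0.04 brightness budget are hypothetical; real networks need an exact solver or sound bounds rather than an endpoint check): if a classifier depends only on mean pixel intensity, a brightness shift of at most eps moves that mean by at most eps, so checking the two extreme shifts certifies the entire continuous range in between.

    import numpy as np

    def toy_classifier(image):
        """The same stand-in 'network' as in the earlier sketch."""
        return 1 if image.mean() > 0.5 else 0

    def certify_brightness_robust(image, eps=0.04):
        """Return True if no brightness shift in [-eps, +eps] changes the label.
        Valid only because this toy classifier is monotone in mean brightness."""
        label = toy_classifier(image)
        low = toy_classifier(np.clip(image - eps, 0.0, 1.0))
        high = toy_classifier(np.clip(image + eps, 0.0, 1.0))
        return low == label and high == label

    image = np.full((28, 28), 0.45)
    print("certified robust to +/-0.04 brightness:", certify_brightness_robust(image))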

“Given one input image, we want to know if we can modify it in a way that it triggers an incorrect classification,” Tjeng commented. “If we can’t, then we have a guarantee that we searched across the whole space of allowable modifications, and found that there is no perturbed version of the original image that is misclassified.”
