Researchers from the University of Washington and the Allen Institute for Artificial Intelligence in Seattle have created a fully automated computer program, Learning Everything about Anything (LEVAN), that teaches itself everything there is to know about any visual concept.

LEVAN’s creators call it "webly supervised," as it uses the web to learn everything it needs to know.

The program essentially searches millions of books and images on the web to learn all possible variations of a concept and then displays the results in the form of a comprehensive, browsable list of images.

It scours the web and digital libraries of books to learn common phrases associated with a particular concept, then searches for those phrases in web image repositories such as Google Images, Bing and Flickr. The relevance of each phrase is then verified by analyzing the content of the images returned and identifying characteristic visual patterns across them using object recognition algorithms.
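
The overall pipeline can be pictured with a short sketch. The helper functions below for n-gram lookup, image search and the object-recognition score are illustrative stubs standing in for the web-facing steps, not the researchers' actual code, and the threshold values are assumptions.

```python
from collections import Counter

def candidate_phrases(concept, ngram_counts, min_count=5):
    """Keep phrases that mention the concept and occur often enough in text."""
    return [phrase for phrase, count in ngram_counts.items()
            if concept in phrase.split() and count >= min_count]

def fetch_images(phrase, limit=50):
    """Stub for querying image repositories such as Google Images, Bing or Flickr."""
    return [f"{phrase.replace(' ', '_')}_{i}.jpg" for i in range(limit)]

def visual_consistency(images):
    """Stub for the object-recognition step: how consistently the images for a
    phrase share characteristic visual patterns (0 = noise, 1 = very consistent)."""
    return 0.8  # placeholder score

def build_concept_model(concept, ngram_counts, threshold=0.5):
    """Assemble the browsable model: sub-concept phrase -> verified images."""
    model = {}
    for phrase in candidate_phrases(concept, ngram_counts):
        images = fetch_images(phrase)
        if visual_consistency(images) >= threshold:
            model[phrase] = images
    return model

if __name__ == "__main__":
    ngrams = Counter({"heavyweight boxing": 120, "boxing ring": 300,
                      "ali boxing": 45, "my boxing": 3})
    print(sorted(build_concept_model("boxing", ngrams)))
    # -> ['ali boxing', 'boxing ring', 'heavyweight boxing']
```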

For instance, the algorithm understands that "heavyweight boxing," "boxing ring" and "ali boxing" are all part of the larger concept of "boxing," and will display the corresponding images when a user queries that concept.

But the program keeps only visual phrases. For example, with the concept "horse," the algorithm would retain phrases such as "jumping horse," "eating horse" and "barrel horse," but would exclude non-visual phrases such as "my horse" and "last horse."
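
As a rough illustration of that filtering rule, the toy scores below stand in for the object-recognition check on "horse" phrases; the scores and the threshold are assumptions, not values taken from the published system.

```python
def is_visual_phrase(image_consistency_score, threshold=0.5):
    """A phrase counts as visual if its retrieved images score above the threshold."""
    return image_consistency_score >= threshold

if __name__ == "__main__":
    # Toy scores standing in for the object-recognition step.
    toy_scores = {"jumping horse": 0.9, "eating horse": 0.7, "barrel horse": 0.8,
                  "my horse": 0.2, "last horse": 0.1}
    print([p for p, s in toy_scores.items() if is_visual_phrase(s)])
    # -> ['jumping horse', 'eating horse', 'barrel horse']
```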

UW assistant professor of computer science and engineering Ali Farhadi said, "It is all about discovering associations between textual and visual data. The program learns to tightly couple rich sets of phrases with pixels in images. This means that it can recognize instances of specific concepts when it sees them."

So far, LEVAN has modeled 175 different concepts. Users can also submit a concept and the program will automatically begin generating an exhaustive list of subcategory images that relate to that concept.

The program was launched in March. The researchers are still working to increase its processing speed and capabilities.

This research was funded by the U.S. Office of Naval Research, the National Science Foundation and the University of Washington.