A system that can recognise faces was one of the products demonstrating the capabilities of neural networks at the IEEE First Annual International Conference on Neural Networks, held in San Diego last week. Hecht-Nielsen Neurocomputer Corp, based in San Diego, showed its IBM AT-based face-recognising system, which uses the company’s new Anza co-processor board. Although the system was shown off recognising faces, the Anza system allows users to design and create simulations of neural networks and is aimed at database searching and robotics applications as well as pattern recognition. In the demonstration, visitors were invited to sign on to the system and stand in front of an ordinary video camera to have their faces digitised. They were then asked to go away and come back later, whereupon the system would identify them by putting up their original image on the screen and speaking their name – even if they had changed their expression or tilted their head at a different angle. Hecht-Nielsen also provided some disguise props – noses, moustaches, beards – to demonstrate the neural network’s nearest neighbour fuzzy matching capability.
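The nearest neighbour matching behind the disguise trick is easy to sketch: each enrolled face lives in the system as a feature vector, and a probe is identified as whichever stored vector it lands closest to, so moderate changes – a tilt, a false moustache – still put it nearest the right entry. A minimal illustration in Python (the names and three-element vectors here are invented for the example; the real system compares spectral features):

```python
import numpy as np

# Hypothetical gallery of enrolled feature vectors, one per person.
gallery = {
    "alice": np.array([0.9, 0.1, 0.3]),
    "bob":   np.array([0.2, 0.8, 0.5]),
}

def identify(probe):
    """Return the enrolled name whose vector is nearest the probe
    in Euclidean distance - the nearest-neighbour decision rule."""
    return min(gallery, key=lambda name: np.linalg.norm(gallery[name] - probe))
```

A probe close to Alice’s stored vector but not identical to it – the equivalent of a changed expression – is still matched to Alice.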
Each new face
According to Tony Materna of Hecht-Nielsen, the only other face-recognition system with similar capabilities was done here in the UK a couple of years ago and consisted of about 30,000 8-bit microprocessors in a parallel processing arrangement. The Anza board, consisting of about 300 neurons and 13,000 connections, has the capacity to memorise 100 faces – and each new face uses two more neurons. The Anza board will be able to implement a neural network with up to 30,000 neurons and 480,000 interconnections. The network uses a new counterpropagation paradigm, invented by Dr Robert Hecht-Nielsen, that enables the face recogniser to operate and learn at the same time, whereas many other paradigms require a separate learning phase. For the face-recognition demonstration, 36 spatial frequencies are derived from a 14-second Fast Fourier Transform of a 32 by 32 – 1,024 – pixel image. The actual processing by the neural network takes less than one second, the company says. The lowest spatial frequencies represent gross facial features such as the width and height of the face; mid-frequencies represent features such as noses and cheekbones, while complexion and other fine details are represented by the highest frequencies. The network is arranged in three layers, or slabs; the first slab contains 36 neurons, one for each of the spatial frequencies derived from the Fourier transform; the second and third slabs each contain one neuron for each face memorised. The Anza neurocomputer co-processor board lists for $9,500 and ships in the middle of next month.
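How one gets from a 32 by 32 pixel image to 36 spatial-frequency inputs is not spelled out, but the description reads roughly as follows: take the two-dimensional Fourier transform and sum the energy in concentric frequency bands, with low bands capturing gross shape and high bands fine detail. A sketch in Python – the radial banding and the normalisation are our assumptions, not Hecht-Nielsen’s published method:

```python
import numpy as np

def spectral_features(image, n_bands=36):
    """Reduce a 32x32 grey-scale image to n_bands spatial-frequency
    energies, lowest frequencies first.  Radial binning of the 2-D
    FFT magnitude is one plausible reading of the article."""
    assert image.shape == (32, 32)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    # Radial distance of each FFT bin from the zero-frequency centre.
    y, x = np.indices(image.shape)
    r = np.hypot(x - 16, y - 16)
    # Equal-width rings from DC out to the corner of the spectrum.
    edges = np.linspace(0, r.max() + 1e-9, n_bands + 1)
    feats = np.array([spectrum[(r >= lo) & (r < hi)].sum()
                      for lo, hi in zip(edges[:-1], edges[1:])])
    return feats / (feats.sum() + 1e-12)   # normalise for matching
```

A flat image concentrates all its energy at the zero-frequency centre, so nearly everything lands in the first band – consistent with the lowest frequencies carrying the grossest features.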
Also on show at the exhibition was a Macintosh program for simulating neural networks. The program was shown by Neuronics Inc, of Cambridge, Massachusetts, and Matt Jensen, who developed the software, claims it is the only neural net simulation environment to sell for less than $10,000; in fact, it sells for just $250. Called MacBrain, it runs at 25,000 connections per second. “What MacBrain is is a very simple way, a very graphic way, of simulating the neural nets,” says Jensen. “Basically you can create neurons on the Mac screen and connect them together just by moving the mouse around; you can perform commands on the system just like regular Mac programs.” It is aimed at people beginning to explore the abstruse world of neural networking – which is nevertheless currently the most promising technology for simulating human thought processes – as well as those who already have a grasp of the technology. “Our first target market is made up of the low-end, non-technical people,” says Jensen. “Primarily that includes students, grad students, psychologists, and non-computer people working in fringe fields that have some overlap into neural network theory and its applications. It’s for the sort of people that don’t want to get too heavily involved in mathematics but just want some idea of what this technology can do for them, and want some results they can see visually.” It is very quick and easy to get things up and running and to adjust parameters interactively. MacBrain runs on the Mac Plus, SE, or II. For those who already have a grasp of the technology, it is said to contain an interpreter and paradigm shells and to enable users to create their own multiple paradigm shells. The company says that it is equipped to simulate adaptive resonance, the Delta rule, Boltzmann machines, and Hopfield nets, whatever they may be – and an August update is set to support Transputer-based boards. That version will also offer two programming languages, one text-based and one graphics-icon-based, so users can create their own types of paradigms and rules. Nestor Inc of Providence, Rhode Island, showed several applications designed around its Nestor Decision Learning System, including object, character, and handwriting recognition systems, adaptive expert systems, and a toolkit for developing neural networking applications. They all run on AT-compatibles or on Sun or Apollo workstations. SAIC, Scientific Applications International Corp of San Diego, showed two systems. The Sigma-0 is an AT-alike without artificial neural network shells and is capable of 10,000 connections per second. The Sigma-1 comes with a mouse, 1Mb memory, a 30Mb hard drive, a full C compiler, artificial neural network shells under Microsoft’s Windows, and a high-level language, Anspec.
The shell software package has six neural network simulations. The Sigma-1 runs at 10m interconnections per second, but with additional boards can run at 30m interconnections per second. Scientific Applications’ defence division showed off a Generic Interactive Neural Network Simulator, a Lisp-based software package running on a Symbolics workstation. Verac Inc, another San Diego-based company, demonstrated various systems developed under government funding, including several associative memory systems and fuzzy cognitive maps for knowledge combination and processing for unsupervised learning procedures. TRW Corp, yet again through its San Diego division, showed its Mark III artificial neural network system, one of which has just gone in at Massachusetts Institute of Technology’s Lincoln Labs. (When Computergram called TRW from London a few weeks back to find out a bit about the Mark III, the company told us, sorry, you’re not US nationals so we’re not permitted to tell you anything!). The Mark III demonstrated five experiments – multiple target tracking from radar hits, which also has potential in air-traffic control systems; image recognition of aircraft; helicopter radar return recognition using nearest neighbour algorithms; a neural net for artificial intelligence applications (using an if-then-else rule system) that learns by back propagation; and a self-learning system that teaches itself feature detection.