Neural networks: who needs them? What practical use are they? And are we about to see them hyped as a panacea just as expert systems become non-U? David Bounds and Paul Gregory, both directors of Marple, Stockport-based Recognition Research Ltd, recently took it upon themselves to explain and justify neural networking – a technology that they claim encompasses artificial intelligence.

Bounds points out that in real life people do not learn by explicit logical programming; they have certain associations and patterns reinforced in their minds. People do not learn what the colour red is by understanding the absorption and refraction of light, for example – they learn by association with certain objects, such as pillar boxes. It is this type of associative learning that neural networks attempt to emulate. However, although the technology attempts to mimic the workings of the brain, the analogy is a loose one: computer science is in a position to copy only a small part of what a biological neural network does, namely training by example. Paul Gregory defines a neural network as a parallel distributed information processing system consisting of processing elements interconnected by signal channels called connections.
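In programming terms, a single processing element of the kind Gregory describes can be sketched in a few lines – a minimal Python illustration (not any Recognition Research code), in which each connection's weight scales the strength of an incoming signal and a sigmoid function squashes the weighted sum into an output:

```python
import math

def processing_element(inputs, weights, bias):
    """One processing element: a weighted sum of inputs, squashed by a sigmoid."""
    # Each connection's weight determines the strength of the signal it passes.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Example: two inputs with arbitrary illustrative weights.
output = processing_element([1.0, 0.0], [0.5, -0.3], 0.1)
print(round(output, 3))
```

Networks are built by wiring the output of one such element to the inputs of others; 'programming' the network then means adjusting the weights rather than writing explicit logic.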

Training algorithms

The output of each processing element can be connected to the inputs of other processing elements via these connections. Each connection has an associated weight, which determines the strength of the signal passed along it. Such networks are ‘programmed’ by applying training patterns that fix the output states of some or all of the processing elements; a learning algorithm then adjusts the connection weights in response to the training patterns.

Research into the technology goes back a long way, but thanks to the efforts of a certain individual called Marvin Minsky – who co-wrote an influential book in 1969 with Seymour Papert – funding for neural network research dried up in the 1970s, because Minsky advised the US Department of Defence that symbolic processing – which evolved into expert systems – was the way to go. In the early 1980s the technology resurfaced when Hopfield wrote a physics paper reformulating the problems. Over the past decade it has gained momentum because of the desktop power offered by workstations and possible VLSI implementations of neural networks, of which more later.

One very important development, however, was training algorithms for multi-layer networks. To explain the point, Bounds gave the example of an experiment in which a large body of medical data was gathered from obstetricians in the form of symptoms and the related diagnoses for back pain. In a multi-layer network, hidden nodes – feature detectors that learn to recognise useful features in the training patterns – allow the connection weights to be adjusted so that the right combination of symptoms triggers the right diagnosis. Consequently, what neural networks do best is classify patterns, and they offer a generic technology in that patterns can be classified outside the training set – in other words, they generalise their knowledge.
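The idea of hidden nodes adjusting connection weights can be sketched in Python. The example below is a toy stand-in for the back-pain experiment – it uses the classic XOR problem rather than real medical data – and trains a small multi-layer network with an error-propagating learning rule of the kind the article describes; the total error falls as the weights adapt:

```python
import math
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Each input pattern is a set of "symptoms"; the target is the "diagnosis".
# XOR is the standard example that single-layer networks cannot learn.
patterns = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 3  # hidden nodes acting as feature detectors
w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b_hidden = [random.uniform(-1, 1) for _ in range(H)]
w_out = [random.uniform(-1, 1) for _ in range(H)]
b_out = random.uniform(-1, 1)

def forward(x):
    hidden = [sigmoid(sum(w * v for w, v in zip(w_hidden[h], x)) + b_hidden[h])
              for h in range(H)]
    out = sigmoid(sum(w * hv for w, hv in zip(w_out, hidden)) + b_out)
    return hidden, out

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in patterns)

lr = 0.5
initial_loss = total_loss()
for epoch in range(5000):
    for x, t in patterns:
        hidden, out = forward(x)
        # Error signal at the output, propagated back to adjust every weight.
        d_out = (out - t) * out * (1 - out)
        for h in range(H):
            d_h = d_out * w_out[h] * hidden[h] * (1 - hidden[h])
            w_out[h] -= lr * d_out * hidden[h]
            for i in range(2):
                w_hidden[h][i] -= lr * d_h * x[i]
            b_hidden[h] -= lr * d_h
        b_out -= lr * d_out
final_loss = total_loss()
print(f"loss: {initial_loss:.3f} -> {final_loss:.3f}")
```

After training, each hidden node responds to a particular combination of inputs – much as the hidden nodes in Bounds's example come to match certain symptoms to certain diagnoses.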
Having established that neural networks are back in vogue, the question is: so what? Do they offer any commercial advantages? Recognition Research’s Paul Gregory outlined the benefits of the technology as follows: a neural network can use data in new and different ways because it can recognise patterns; it can generalise from its training to cases it has not encountered before, offering potential for general management functions; and it can capture real-world knowledge because it is not restricted to linear processing.

By Katy Ring

However, that being said, neural networks are not appropriate for everyone: users need to look for areas where neural networks can reap financial gains from an improvement in effectiveness of 5% to 10%. Applications can be built much faster using a neural network because, once the tools exist to develop a network, any data can be put into it. But users have to calculate whether they have enough good quality data to train a neural network successfully. In general the financial and banking communities have plenty of such data, whereas the retail sector doesn’t. The other thing to bear in mind when considering neural networking is that, like everything else in computing, it is not a stand-alone panacea: it often needs to be integrated with a database and a management information system in order to get the best results.

Indeed, there are two approaches to implementing a neural network: software running on personal computers and workstations, and networks embedded in equipment via special-purpose integrated circuits. It will not come as a surprise to hear that Recognition Research is addressing both types of implementation. The company has become involved with a UK consortium to develop a real-time single-chip digital neural network: Recognition is putting the software together, while Neural Technologies Ltd is designing the specification for the chip and Micro Circuit Engineering Ltd is building the memory chips. Together this all adds up to the NT404 device – a Neural Instruction Set Processor that plugs into personal computers. In neural networking terms it offers eight layers, 65K synapses and 8K neurons, and supports the design, development and series manufacture of such networks.

Level5 Object

In the US two companies in particular are developing neural network chips: Intel is developing an analogue device (CI No 1,392) and Neural Semiconductor Corp is at work on a general-purpose digital neural networking chip. The UK consortium, however, is designing a niche product, intended specifically to go into a system as an input-output device for a controller.

As for fitting neural networks into a corporation’s wider use of information technology, Recognition Research is at work here too, collaborating with Information Builders Inc to combine AutoNet with the Level5 Object application development environment. Hitherto, AutoNet has been sold for use on stand-alone desktop machines, where a neural network can be trained with data from any ASCII file from a spreadsheet or database. But when it is integrated into the Focus environment, AutoNet will be able to work in a corporate network. When up and running, the Level5 Object expert system can ask the operator to add a particular value, or will access the database to get that value, and is therefore in a position to build a good vector for the neural network to work on. The solution to the problem is then fed back from the neural network to Level5 Object. Control, though, rests with the expert system, which decides which neural network to use, which data to use and so on. The product is currently in beta test and will be marketed shortly. The process will be transparent to the end-user, who will simply select options from the Windows 3.0 environment.

So, in answer to the question, neural networks: who needs ’em? Anyone with plenty of good data who could do with back-up on those off-days when decision-making does not come naturally, or when panic sets in. Defence departments are particularly avid users.