In one of the most convincing endorsements yet of the exceptional potential of the Inmos International Transputer, the parallel-processing reduced instruction set microprocessor, the US National Aeronautics & Space Administration has built a neural network workstation out of 40 Transputers, after rejecting several available parallel processors, including the Connection Machine, the BBN Butterfly, the Ncube and Intel Hypercubes, as either too inflexible or too expensive.

Developed at NASA's Johnson Space Center in Houston, the Neural Network Environment Transputer System consists of 10 of the Inmos board-level four-Transputer boards, forming a 40-node system. An IBM Personal Computer serves as the controller, and a graphics Transputer drives a colour monitor. The Microbytes newswire reports that the workstation is already up and running several different neural networking applications.

NASA/Johnson already has a very fast neural network simulator running on its NEC SX-2 supercomputer, but time on that machine is very expensive: the workstation will cost less than $100,000, a sum that buys less than two days' time on the SX-2.

The purpose of the workstation, which simulates the SX-2 simulator, is to reduce the time needed to analyse the ability of a neural network system to solve real-world problems in robotics, vision applications and fault diagnosis, and it will be used to develop neural net applications cheaply. Unlike other neural net simulators, which are confined to one or two types of network, the NASA machine should be able to simulate all known types of neural net and make it possible to implement a new kind in less than a day.

Once proven, the design will be offered to other NASA bases to duplicate, and will be released into the public domain.