Continuous speech recognition has been taken one step further by Tony Robinson, a researcher at the Department of Engineering at the University of Cambridge. At the Neural Computing Applications Forum’s quarterly meeting in Cambridge recently, he demonstrated the speaker-independent recognition system developed for his doctorate, which transcribes speech in context in something like real time. It works by identifying the individual phonemes in speech.
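
To make the phoneme idea concrete, here is a minimal Python sketch of how a string of recognised phonemes might be turned back into words through a pronunciation lexicon. This is not Robinson’s code; the lexicon entries and the simple greedy matching are invented purely for illustration.

# A toy pronunciation lexicon mapping phoneme strings to words.
# The entries are illustrative only, not from the Cambridge system.
LEXICON = {
    ("w", "ah", "n"): "one",
    ("t", "uw"): "two",
    ("th", "r", "iy"): "three",
    ("f", "ao", "r"): "four",
    ("f", "ay", "v"): "five",
}

def phonemes_to_words(phonemes):
    """Greedily match the longest lexicon entry at each position."""
    words, i = [], 0
    while i < len(phonemes):
        for length in range(len(phonemes) - i, 0, -1):
            chunk = tuple(phonemes[i:i + length])
            if chunk in LEXICON:
                words.append(LEXICON[chunk])
                i += length
                break
        else:
            i += 1  # skip a phoneme the lexicon cannot place
    return words

print(phonemes_to_words(["w", "ah", "n", "t", "uw", "th", "r", "iy"]))
# -> ['one', 'two', 'three']

In a real recogniser the matching is statistical rather than exact, which is why acoustically similar phoneme strings can come out as the wrong words, as the demonstration below shows.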

Five deaths

The recognition is made more accurate because the system has been trained on recordings from 100 speakers, who spent four months reading excerpts from the Wall Street Journal to build a database with a 20,000-word vocabulary. The system can recognise general speech, but is specifically trained on the grammar of that newspaper. It uses a multi-layer perceptron type of network with feedback to carry out pattern classification on the speech; a simplified sketch of this kind of feedback network is given at the end of this piece.

The demonstration showed that the technology still has some development to undergo. The spoken words ‘one, two, three, four, five, six’ came out in text as ‘who want to free for five deaths’. The words sound similar, but the meaning is clearly way off. Robinson added that the system works better for someone with a clear speaking voice, which he said he doesn’t have, so someone like a newsreader might get better results. And even though the system is speaker-independent, recognition does improve if the same person uses it regularly.

The disadvantage of the system is the amount of time it takes to train. In addition, the database of speech information is huge: it fills six CD-ROMs, each holding 600Mb of data. The advantage is that it can run on a standard personal computer. The demo was running on a 60MHz Pentium machine with 60Mb of memory and a Soundblaster board fitted, connected to a larger Unix machine in the Department of Engineering.

There has been a lot of interest from companies, and the University is now hoping to train the system on the grammar of a major UK newspaper. A 1Mb demo version of the system is available.
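
For readers curious about the ‘multi-layer perceptron with feedback’ mentioned above, the toy Python sketch below shows the general shape of such a recurrent classifier: each acoustic frame is combined with a hidden state fed back from the previous frame, so the phoneme decision for one frame depends on the context before it. The layer sizes and random weights are illustrative assumptions; this is not the trained Cambridge system.

# Toy recurrent classifier: an MLP whose hidden state is fed back
# in at each acoustic frame. Sizes and weights are illustrative.
import numpy as np

N_FEATURES = 13   # e.g. one short feature vector per 10ms of audio
N_HIDDEN = 32
N_PHONEMES = 45   # roughly the size of an English phoneme inventory

rng = np.random.default_rng(0)
W_in = rng.normal(0, 0.1, (N_HIDDEN, N_FEATURES))
W_rec = rng.normal(0, 0.1, (N_HIDDEN, N_HIDDEN))   # the feedback path
W_out = rng.normal(0, 0.1, (N_PHONEMES, N_HIDDEN))

def classify_frames(frames):
    """Return one phoneme class index per acoustic frame."""
    hidden = np.zeros(N_HIDDEN)
    labels = []
    for frame in frames:
        # The new hidden state mixes the current frame with the
        # fed-back state, giving the classifier a memory of context.
        hidden = np.tanh(W_in @ frame + W_rec @ hidden)
        scores = W_out @ hidden
        labels.append(int(np.argmax(scores)))
    return labels

# One second of (random stand-in) speech at 100 frames per second:
utterance = rng.normal(size=(100, N_FEATURES))
print(classify_frames(utterance)[:10])

The feedback connection is what distinguishes this from a plain multi-layer perceptron, and is what lets the network use the preceding sounds when classifying the current one.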