If you attend the second annual staging of an event devoted to a hot emerging technology, you don't expect speaker after speaker to admit sadly that the technology is probably a dead end – but that was what happened at the CAPPS conference in Austin last week. Abigail Waraker tells the sorry tale.

Vendors of parallel processing systems got together with current and potential users of the technology at the second annual CAPPS (Commercial Applications of Parallel Processing Systems) conference in Austin, Texas last week to share their knowledge of, and insight into, transferring these systems beyond the realms of academia and scientific research to the commercial arena. The focus was on how people are applying parallel processing more effectively, both to established application areas such as database mining and to new ones such as finance and logistics. The conference began with promise. "1994 is the year parallel computing has arrived," said Irving Wladawsky-Berger, general manager of the Power Parallel Systems Division at IBM Corp, in his keynote address. He argued that a revolution in the technology is starting to unfold. "If you give the computer an attractive price and lots of applications, it will take off. We are at this crossroads in parallel computing. The software is finally becoming available," he added.
Cut the cost
The tutorial session on developing business applications for parallel processing machines set out to explain how new commercial opportunities can be created by using parallel processing to optimise larger business problems than ever before. Unfortunately those expectations were not borne out. Andrew Whinston, professor at the University of Texas Business School, says that because of the amount of work that goes into developing general parallel applications for solving huge and complex problems, an alternative would be to have parallel computers acting as servers across the Internet. People who have developed applications could then share them globally. Crossing the world on the Internet to find optimisation software already written by someone else would cut out the cost of buying the hardware and of developing the software, with the convenience that applications could be shared and used worldwide. It would also avoid disparate developers working simultaneously on the same complex problem. This, Whinston says, is where the niche for parallel computers lies. The user would then only have to pay for the Internet connection.

On the surface it is quite an innovative idea, but it would not be much use to anyone other than the light-to-medium user. It might suit school teachers who need to work out timetables once a year so that classes don't clash; it wouldn't really solve the problem of a major airline trying to schedule worldwide flight paths for hundreds of pilots on thousands of possible routes in real time after adverse weather has caused several flights to be cancelled. There is also the problem of limited bandwidth: the client software would need to carry out the data manipulation, and data would need to be compressed and encrypted before being sent across the network (a sketch of the idea appears below). Other problems would be congestion and the availability of the server, which Whinston says would be issues to sort out with the service provider. None of this helps to improve the credibility of parallel processing, or to make the technology itself any more useful.
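To make Whinston's compute-server idea concrete, here is a minimal client-side sketch in modern Python. It is illustrative only: the server address, the wire format and the problem encoding are all invented for the example, and encryption is noted in a comment rather than implemented.

    import json
    import socket
    import zlib

    # Hypothetical address of a shared parallel optimisation server;
    # in Whinston's scheme the user finds such a server across the
    # Internet instead of buying the hardware.
    SERVER = ("solver.example.net", 9000)

    def solve_remotely(problem):
        # The client does the local data manipulation: the problem is
        # serialised and compressed before crossing the network. The
        # scheme would also require encryption (for instance by
        # wrapping the socket in TLS); that step is omitted here.
        payload = zlib.compress(json.dumps(problem).encode("utf-8"))
        with socket.create_connection(SERVER) as sock:
            sock.sendall(payload)
            sock.shutdown(socket.SHUT_WR)  # signal end of request
            raw = b""
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    break
                raw += chunk
        return json.loads(zlib.decompress(raw).decode("utf-8"))

    if __name__ == "__main__":
        # The once-a-year school timetabling job cited above as a
        # plausible light use of such a service.
        problem = {"kind": "timetabling", "classes": ["maths", "art"],
                   "rooms": 2, "periods": 5}
        print(solve_remotely(problem))

The user pays only for the connection, as Whinston suggests, but everything the example glosses over (bandwidth, congestion, server availability) is exactly what remains unresolved.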
Parallel processing is not widely accepted in the market, and companies well known for their work in the parallel processing business are facing difficulties right now. In August, massively parallel systems pioneer Thinking Machines Corp filed for Chapter 11 court protection, and last month Kendall Square Research Corp announced that it planned to cut back its staff and development after expected orders failed to materialise and it proved unable to raise the additional capital needed to finance the business.
"There is some great hardware and low-level software, but it's not going to make any money," says Whinston. The general opinion among the delegates seemed to be that the hardware for parallel computing is pretty much there; the problem is the software, or rather the lack of it. With no significant software applications available, potential users of parallel machines cannot get sufficient business advantage out of them without investing huge amounts of time and resources in developing their own software. In these circumstances it takes a large company with the resources to spare to invest in software development for an already expensive hardware system, and the problem being solved needs to be integral to the company's operation, delivering efficiencies or adding enough value to justify the expense. This goes some way to explaining why it is generally larger companies that are cited as successful users of parallel processing systems.

Another problem is that writing software for parallel systems is incredibly difficult anyway. There has not been enough investment to get general applications onto the market, both because of the development costs and because the cost of the parallel processing hardware means there would be little money to be made from software even if it were developed for the general market. "There is no commercial impetus. That's the real hang-up. Algorithms have to be designed," admitted Glenn Graves, professor of Management Science at the University of California at Los Angeles. So where does this leave Wladawsky-Berger's claim that the time for parallel processing is now? "We can't give a general procedure for making an algorithm run in parallel," says Jan Stallaert, professor at the University of Texas. It remains a hugely complex task. Successful operational examples of massive data mining systems do exist, but for applications in areas such as logistics or finance the task is particularly complex, as the sketch below suggests.
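The gap between the two kinds of workload can be made concrete. In the toy Python sketch below (the data, threshold and two-partition split are invented), a data mining scan parallelises almost for free because each partition is counted independently and the partial results merge trivially; a tightly coupled scheduling or portfolio model offers no such easy split.

    from multiprocessing import Pool

    def count_large(partition):
        # Count the records in one partition above a threshold; no
        # worker ever needs to see another worker's data.
        return sum(1 for record in partition if record["amount"] > 100)

    if __name__ == "__main__":
        # Pretend each sublist sits on a different node's disk.
        partitions = [[{"amount": 50}, {"amount": 150}],
                      [{"amount": 300}, {"amount": 20}]]
        with Pool(2) as pool:
            print(sum(pool.map(count_large, partitions)))  # prints 2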
Wrong path
Stallaert argues that to make an algorithm run in parallel, the structure of the problem has to be exploited. Since different problems within a class share the same inherent structure, he says, the approach could be to focus on particular categories of problem and work from the ground up. Once one type of problem has been solved, the same solution, or the lessons learned in developing it, could be applied to other similar problems, rather than trying to solve the problem of programming massively parallel systems in general. The developer would concentrate on the domain where the problem's inherent structure lies, exploit that structure, and build on the success of that specific application area (a toy sketch of this pattern closes this piece). This also sidesteps the debate over which type of parallel processing is best: the individual application, not the architecture, becomes the focus.

Unfortunately the tutorial dwelt too long on these abstractions of how to create applications for parallel processing systems, leaving delegates with the impression that if even these experts could offer no real hope of programming for the complexities demanded of massively parallel processing, then perhaps it is the wrong path altogether. The feeling was that we may have to wait until commercially available chips become so much faster that they can cope with the massive amounts of data needing to be processed, and that massively parallel processing will fall by the wayside except for the niche of scientific applications.
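Stallaert's ground-up approach can be caricatured in a few lines of Python. The harness below is purely illustrative (both toy problem classes are invented): it assumes a problem whose structure decomposes into independent blocks plus a cheap combining step, and once written for one problem class it transfers unchanged to another class with the same structure.

    from multiprocessing import Pool

    def solve_block_structured(blocks, solve_block, combine, workers=2):
        # Exploit the problem's inherent structure: solve each
        # independent block in parallel, then run the cheap
        # coordination step over the partial results.
        with Pool(workers) as pool:
            partials = pool.map(solve_block, blocks)
        return combine(partials)

    # Problem class A: pick the cheapest route per depot and total
    # the cost (a toy stand-in for a logistics model).
    def cheapest_route(depot):
        return min(depot["route_costs"])

    # Problem class B: a different domain (portfolio sub-accounts)
    # with the same block structure, so the harness is reused as-is.
    def best_return(account):
        return max(account["candidate_returns"])

    if __name__ == "__main__":
        depots = [{"route_costs": [4, 2, 7]}, {"route_costs": [5, 1]}]
        print(solve_block_structured(depots, cheapest_route, sum))  # 3

        accounts = [{"candidate_returns": [0.04, 0.07]},
                    {"candidate_returns": [0.02, 0.05]}]
        print(solve_block_structured(accounts, best_return, sum))  # 0.12

The hard part Stallaert points to is identifying such a decomposition in the first place; for real logistics or finance models, the blocks are rarely this independent.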