We outlined in CI No 594 one scenario for IBM’s progression from Sierra into Summit and thereafter, and while other observers may quibble about the details – perhaps the number of channels or the maximum main memories will differ a little – the overall game plan outlined is quite unexceptionable, and would receive endorsement from almost all current IBM watchers. But, as we pointed out at the end of the piece, it is derived from a very intelligent and judicious mapping of the past onto the future, taking into account the features that are theoretically possible within MVS/XA and the performance that will be available from the semiconductor industry for the memory and logic circuitry. But it assumes that IBM will retain its monopoly position in the major data processing institutions of the world, and in the light of events in 1986, such an assumption is a decidedly dangerous one.
Seen as grudging
There is no reason to doubt that the basic 9370 processors would have come out in some form or another last year or this – but the precise form in which they were announced – features like support for Ethernet, a stress on Unix – was dictated not by IBM’s internal wisdom but by intense competitive pressures from outside. So many governments around the world – the US, Swedish and Danish in the forefront – have deserted the IBM mid-range standard for industry standards led by Unix, Ethernet and Open Systems Interconnection that many of the features in the 9370s are there only because, without them, IBM would be shut out of billions of dollars of public sector tenders. And to the extent that they are there not because IBM wants them but because it can’t afford not to have them, they will be widely seen as grudging, and users are likely to favour vendors that have embraced them wholeheartedly. Make no mistake about it, the 9370s are going to be enormous sellers – but they may well nevertheless not sell in the numbers that IBM requires. Success and failure are relative, but IBM’s commendable policy of not trying to balance the books by laying off its surplus employees predicates a very high level of absolutes. One of the most disconcerting conclusions drawn by William Husband’s projection of the 3090 to Summit and beyond is that IBM would at some stage need to dump MVS/XA for a new operating system. There is no question that by 1993 there will be a market for mainframes with 16Gb of main memory and 300 MIPS performance (that’s a thousand 370/168s under one hood!) – among financial institutions, oil companies, airlines. The question that is much less easy to answer is whether the market will be big enough to cover the development and support costs – plus the enormous mandated IBM overhead that will go with such machines. And that is before the possibility of yet another major conversion to a new operating system is taken into account.
The trend over the past decade has been away from monolithic mainframes and towards distributed processing. That trend has been read as a move away from the ravaged Bunch companies and towards IBM, but that is only true in part.
Because for every installation that was lost by the Bunch to IBM, one or more installations was lost to mainframes altogether, and was picked up by the likes of DEC, Tandem, Prime, Data General, DEC and Tandem. The repetition is deliberate. DEC is offering an ever expanding power range within a single architecture, and comparatively simple within-range networking. Tandem is offering very powerful networking coupled with efficient transaction processing of rapidly-growing intensity. The past three years have seen an exceptional flowering of new computer companies bringing new computing concepts to market, a flowering that hasn’t been seen since the late 1960s and early 1970s. And traditional IBM users like Citibank are taking companies like Teradata with its parallel database machine very seriously indeed. Alliant Computer Systems with its FX line of distributed parallel processors, Sun Microsystems with its Network File System, Bolt Beranek & Newman with its Butterfly parallel processor, and Sequent Computer Systems with its intensive transaction multiprocessor all have to be taken very seriously. And, whatever its shortcomings in commercial data processing, while MVS/XA runs on a comparative handful of top-end machines, Unix is supported on everything from 8088 microcomputers to the Cray 2. And while many of the newcomers will disappear, the most viable of their ideas will live on – while Flexible Computer Corp may not survive its cash crunch, Parallel Computers – maker of fault-tolerant Unix machines – is being acquired by that hard-line survivor General Automation. Companies like MIPS Computer Systems with its high-performance RISC boards have enough to offer that, if they don’t make it on their own, they will be acquired either by the surviving minimakers or by the top echelon of workstation manufacturers – the Apollos and Suns of the distributed processing world. If IBM really is faced with another major operating system switch at the top end, more and more users will have to ask themselves whether it really makes sense any longer to try to bend and shape the obsolete 360 architecture to ever more exotic modes of processing for which it was never designed – whether it would not be better to take a long hard look at their multifarious applications, break them up, and spread them around the various machines and architectures specifically designed to handle those applications most efficiently and cost-effectively. Lloyds Bank currently runs its Cashpoint network of automatic teller terminals on dedicated IBM mainframes: current accounts are updated overnight in batch from the transaction file created during the day, and a new memo file is loaded for the next day with updated current account balances. What the bank and its customers want is for the whole recent current account record to go on line and to be updated in real time.
It isn’t, yet, because it would demand such an enormous complex of IBM mainframes to handle it all. But will that giant complex of IBM mainframes ever be the right way to solve the problem? Wouldn’t a network of Tandems or VAXes maintaining much of the account data within the branches, with the IBM hosts reduced to a role of central database processing, accomplish the task more efficiently and cost-effectively? Maybe not this time around, but in 1993? IBM’s problem is that it can only come up with more and more and yet more of the same. Top management is so bereft of the kind of courage that resulted in the launch of the 360 that even when the backroom boys do come out with brilliantly original and innovative products like VM/370 and the System 38, they are regarded with the utmost suspicion by the top marketing people, who thereby ensure that they are commercial failures. The same fate seems already to have been mapped out for the RT Personal.
And with that kind of tramline mentality at the top, while it is certain that IBM’s back-room boys can come up with IBM’s own parallel processors and fault-tolerant transaction processors, those products will be so starved of funds, resources and support that, once the need for them becomes inescapable, IBM will be forced to buy in the critical product, as it has done with Rolm’s PABXs – only switching computers after all – and Stratus Computers’ fault-tolerant System 88. And so while IBM’s top management is certainly confidently planning for Summit, it is far from clear that the machine will provide the same kind of effortless success achieved for the company by the 308X line. Way back in 1977, Dr Gene Amdahl gently rubbished the performance of the 3033 by comparison with that of his own 470 machines, and then allowed that he was very impressed by it: “I think it is the best that IBM could have done in the circumstances.” Will IBM’s best be good enough for the world of the 1990s?