This article is the first part of a three-part series from Software Futures, a sister publication
By Lloyd Blythen

‘If you’ve got it, flaunt it,’ the saying goes, to which the truly ambitious would add, ‘and if you might get it some day, flaunt it now.’ The computer industry, never a shelter for shrinking violets, is so accustomed to outrageous propositions that they’re regarded as commonplace. Nonetheless Bill Gates caused some astonishment when, on May 20th, he took the stage and launched Microsoft’s self-declared ‘scalability day’.
For Microsoft now claims (‘believes’ may be an overstatement) that its server software is scalable to large-enterprise level. No, no it doesn’t. What it claims, as a straight-faced Gates echoed on May 20th, is: ‘Any business of any size can now run its applications on Microsoft software and industry-standard hardware.’ Not large enterprise – the largest. Not in three months or three years – now. Got a bank? Get a SQL Server database. A hospital waiting-list? Windows NT is for you, they’re saying in Redmond. We’re no longer talking twenty desktop ducklings chaperoned by a mid-range Mallard; Gates and his mates maintain that their products are ready to replace mainframes and (especially) Unix servers, and that the only thing cheap about them is the price. Since ‘scalability day’ Microsoft has relentlessly hyped features and benchmarks in its push for the summit of the software market. And top-end customers are, if not all ears, at least prepared to listen. But are the features available, and are they reliable? Are the benchmarks relevant, and do they uphold Microsoft’s design principles or merely the Principle of Blatant Assertion?
Scalability day, like any marketing extravaganza, can be heeded or ignored at will, but benchmarks demand some attention. Several performance figures recently published by Microsoft have raised eyebrows. Typical – and topping the list – are the 1bn transactions the company achieved in a day, using Pentium machines running its NT Server 4.0, SQL Server 6.5, and Distributed Transaction Coordinator software. The attendant advertising is worded with exquisite care – as benchmark publicity always is – but the transaction rate is impressive and no-one has questioned its accuracy. Its relevance is another matter.

Since achieving the billion benchmark, Microsoft has trumpeted it as evidence that its software is scalable. However, to reach the figure the company appears to have resorted not to scalability but to scalability redefined. A careful examination of the server testbed manifest reveals 25 clustered machines, each sporting four CPUs.

Clustering – connecting independent machines into a cooperating group – is a valuable hardware solution to such problems as reliability and transaction bandwidth. It is an adjunct to software scalability – the extent to which a single package can accommodate additional load or make use of additional hardware – but not a means of achieving it. In particular, nobody defines clustering as a replacement for software scalability. Nobody except Microsoft, apparently: for the benchmark test the company adopted clustering, not as a necessity for demonstrating that its server software scales well but precisely because it doesn’t.

Without clustering, NT Server would never have made it, due to the ceiling on the operating system’s processor scalability. NT can distribute processing loads over a maximum of four CPUs. (Add-ons from various hardware vendors boost this limit – as high as 32 in some cases – but they’re not standard, they don’t go as high as Unix does, and they’re not available from Microsoft.)
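Some back-of-the-envelope arithmetic puts the billion-a-day headline in perspective. The per-machine and per-CPU rates below are derived from the published figures – 1bn transactions in a day, 25 machines, four CPUs apiece; the breakdown is our own arithmetic, not anything Microsoft has claimed:

```python
# Derived from Microsoft's published benchmark figures: one billion
# transactions per day across a 25-machine cluster, four CPUs per machine.
# The per-node breakdown is our own arithmetic, not Microsoft's claim.

TRANSACTIONS_PER_DAY = 1_000_000_000
MACHINES = 25
CPUS_PER_MACHINE = 4
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

per_machine_per_day = TRANSACTIONS_PER_DAY / MACHINES            # 40 million
per_machine_per_second = per_machine_per_day / SECONDS_PER_DAY   # ~463
per_cpu_per_second = per_machine_per_second / CPUS_PER_MACHINE   # ~116

print(f"{per_machine_per_day:,.0f} transactions/day per machine")
print(f"{per_machine_per_second:,.0f} transactions/sec per machine")
print(f"{per_cpu_per_second:,.0f} transactions/sec per CPU")
```

At roughly 463 transactions per second per machine, each node carried a modest share of the headline figure – a measure of how many boxes were harnessed rather than of any one of them.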
Every machine in the testbed was therefore fitted with an individual copy of Windows NT – 25 copies of the operating system. That might look good for sales of NT, but let’s not confuse salability with scalability.