Part three of a three-part series from Software Futures, a sister publication.

As Microsoft repeatedly points out, the purchase costs of Unix servers in general (and AIX ones in particular) are a good deal higher than those of comparable systems built from Microsoft software and commodity boxes. So are some of the administration costs.

By Lloyd Blythen

NT Server 4.0 offers virtually the same GUI as Windows 95, which makes most (but not all) management operations more user-friendly and generally quicker than on Unix. NT’s user-friendliness also benefits from its consistency: there is only one NT, whereas different Unix sites favor different text-mode shells and the ‘standard’ Unix GUI – the X Window System – comes in numerous flavors and configurations. There is also a differential in staff costs; these vary, of course, but recently published figures show typical senior NT sysadmins to be about 10% cheaper than their Unix counterparts.

The desire to recover development outlay doesn’t excuse overcharging, and there’s little doubt that Unix users are routinely held hostage to their need for high reliability. But that is only possible because reliability is paramount in the large-enterprise sector, and before Microsoft is ready to enter that market it must demonstrate greater recognition of the fact.

Windows NT Server 4.0 and server-side applications like SQL Server 6.5 are young software, still developing to cope with the heavy loads of large enterprises and well short of Unix in reliability terms. When comparing the cost of Microsoft products with those of Unix systems, no-one at Redmond factors in the cost of a fatal seizure – or even admits to the risk of one. But, as a recent example showed all too clearly, that risk is real and not limited to certain combinations of load, software and CPU count, as was the case at Bloor Research.

Among the applications Microsoft frequently suggests for its server software is large-enterprise Internet and intranet service, and the company has ample opportunity – with one of the most active Web sites anywhere – to soak-test its products for this market. Yet only last month its own large Web server was brought down by a programming error in Internet Information Server (IIS), part of Windows NT Server 4.0. In fact ‘brought down’ is a charitable description; ‘squashed’ might be more appropriate. Few people had access over a two-day period, and many managed to contact the site only sporadically, if at all, for almost a week. The servers remained up – they could be ‘pinged’ – but were capable only of reporting that too many users were attempting to connect. Too many users and two-day outages are not the stuff of scalable, enterprise-ready systems.

No Warning

Microsoft pointed out that the failure was triggered by a hacker, who discovered a bug that caused IIS to deny service when fed a particular Internet address. The company also claims that the attack coincided with a scheduled plan to bring more servers online at its site. But in saying this it is shooting itself in the foot. Users received no warning that outages were scheduled (let alone that they might be total and continue for several days), and the lack of progress reports, combined with the drastic nature of the disruptions, suggests that a panic, not a plan, was in progress.

If this was merely a procedural oversight, then Microsoft’s view of service to users is at odds with the demands of the large-enterprise customers it hopes to attract; those customers put continuity of service at the top of the list, and put procedures in place to guarantee it. If it was not a procedural problem but a software one, then either NT Server is incapable of accepting a scheduled hardware upgrade without major disruption, or the ‘scheduled’ outage was in fact part of Microsoft’s response to the hacker’s attack. Neither says much for NT Server’s systems-management facilities, which are frequently listed among its shortcomings. I believe the most likely explanation is that there was no ‘scheduled’ outage, and that IIS simply suffered a hack that left Microsoft at a loss and its users in the dark.

To its credit, Microsoft has apologized for the disruption and fixed both its site and its software. But one fixed bug won’t convince large-enterprise customers that NT Server is sufficiently reliable for their needs; whether for two days or two hours, some outages are just too scary for some purchasers to contemplate.

Bugs happen. They happened to Unix, when a programming error in the ‘sendmail’ mail software allowed Robert Morris to launch the Internet worm. But as software matures, bugs arise less and less frequently, and it’s this maturity for which high-end Unix customers are prepared to pay – through the nose if necessary. Microsoft can wail about hackers as loudly as it likes; any user of its software who strikes a bug is going to see itself, not Microsoft, as the victim. Hackers certainly pose a problem, but they can’t exploit bugs that don’t get shipped and, when they do discover bugs, they’re only finding what high-end customers will expect Microsoft to have found already.

Reliability won’t come overnight to Microsoft’s systems and, because they are still growing, the sheer proportion of new code means it will come more slowly than it would if they were already powerful enough to scale to the large enterprise. But no-one doubts that Unix is eventually going to be sidelined into specialist applications, in favor of Microsoft server software. At the upper end, Windows NT Server is predicted to take 40% of the large-enterprise market by the turn of the century.

Clustering may be Redmond’s foot in the door to this market, but it’s not going to take the place of improved scalability. For a start, in view of licensing considerations and the questions surrounding NT Server’s reliability and management tools, large enterprises simply won’t accept clusters that require huge numbers of copies of NT on boxes limited to four CPUs each. Microsoft, of course, has other plans anyway. The company sees SMP as the immediate future of its multi-processing core (not the alternative Non-Uniform Memory Access architecture, or NUMA), and intends that clustering will play an ongoing role in delivering high-end grunt.
Its Cluster Server technology (commonly called WolfPack) is a key ingredient, although it missed the original April shipping target and looks more likely to arrive in Q3. At that stage WolfPack will support only failover across two servers; multiple-server failover isn’t expected until at least early 1998, and the ability to spread applications across multiple servers is more than a year away. Cluster-aware versions of SQL Server 6.5 and the Exchange groupware product have also been announced for delivery within the next four months. Meanwhile Microsoft is working to boost NT Server’s ability to scale across more than four CPUs. With NT version 5 the company is leaping on the bandwagon of compatibility with the Hewlett-Packard/Intel 64-bit ‘Merced’ processor but, like WolfPack, NT 5 is suffering delays. In the interim an ‘Enterprise’ (read ‘stop-gap’) edition of NT Server 4.0, announced at Scalability Day, will utilize up to eight CPUs and is scheduled for delivery in Q3.
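For readers unfamiliar with the two-node failover model described above, the sketch below illustrates the general idea: a standby machine watches the primary’s heartbeat and assumes the workload once a run of consecutive heartbeats is missed. It is a minimal, generic illustration in Python and makes no claim to reflect WolfPack’s actual design; the heartbeat address and the check_heartbeat() and start_service() routines are hypothetical.

# Conceptual sketch of two-node failover (generic, not WolfPack-specific):
# the standby node polls the primary's heartbeat endpoint and takes over
# the (hypothetical) service after a configurable number of missed beats.

import socket
import time

PRIMARY_ADDR = ("192.0.2.10", 9000)   # hypothetical heartbeat endpoint on the primary
MISS_LIMIT = 3                        # consecutive misses tolerated before failover
INTERVAL = 2.0                        # seconds between heartbeat checks


def check_heartbeat(addr, timeout=1.0):
    """Return True if the primary accepts a TCP connection on its heartbeat port."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False


def start_service():
    """Placeholder for claiming shared resources and starting the application."""
    print("standby: primary lost - taking over the service")


def standby_loop():
    misses = 0
    while True:
        if check_heartbeat(PRIMARY_ADDR):
            misses = 0                # primary is alive; reset the counter
        else:
            misses += 1
            if misses >= MISS_LIMIT:
                start_service()       # declare the primary dead and fail over
                break
        time.sleep(INTERVAL)


if __name__ == "__main__":
    standby_loop()

In a real cluster the takeover step would also involve claiming shared disks and network addresses and restarting applications cleanly, which is where the hard engineering – and much of the reliability debate above – actually lies.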

Conclusion

There’s no doubt that Microsoft will eventually compete at the top end of the server market. That the company’s software will continue to improve in speed, power, reliability, and user-friendliness is clear too: no software vendor comes out with a less capable package than the version before. But it’s hard to avoid the conclusion that, against entrenched, well-equipped, veteran vendors, Microsoft is proposing to field a software army barely out of short pants. Impressive benchmarks, development plans among its partners and at Redmond, and all the advertising in the world won’t displace one Unix box from a truly large, mission-critical enterprise. As for the scalability of its current server systems, the last word goes to Jocelyne Attal, IBM’s VP for (paradoxically) Windows NT marketing. ‘NT doesn’t scale today and everybody knows it,’ Attal has said. By any measure except Microsoft’s, she appears to be right.