I read with great interest the recent Merrill Lynch in-depth report, called the Server Performance Monitor, that the brokerage house's server and enterprise hardware analysts put together and which Computergram briefly summarized. One of the basic tenets of the report, which I do not necessarily wholly agree with, is that users don't buy benchmarks, they buy a relationship. It is Merrill's belief, as is the case with many computer industry analysts whose jobs don't depend on buying servers and making them do real work, that performance, and implicitly price/performance, is just one element of a computer company's strategy and should not be overemphasized by user companies that need to buy servers to support their businesses. Especially if you are trying to peddle IBM stock, we suppose.

Merrill's right: people don't depend just on benchmarks to buy machines. There are prejudices and preferences in the computer business, just as there are in every other human endeavor. Some companies buy IBM mainframes because they always have and they have always worked; more power to them. Other companies change platforms every five or ten years because their vendors and their programmers never seem to get their respective acts together. There's a lot more anger out there in the glass house than anyone ever talks about, and this, too, drives purchases. I won't buy a Dodge ever again for much the same reason, and I wouldn't marry a redhead, either. Bad experiences as much as good ones drive what companies do.

Suffice it to say, while Merrill doesn't seem to think that server benchmarks are important, I think that even with all the gaming that goes on within the TPC-C or SAP R/3 or BAPCo tests, these benchmarks are important for three simple reasons.

First, tests provide a price ceiling for servers. Companies buying servers should simply never pay more for a server than the price reported in the latest TPC-C result for that platform, regardless of vendor and nearly regardless of the platform. If you are moving to a new platform, there has never been a better time to make a vendor take a loss in the coming quarter on your account, because believe me, they are going to make up for it many times over in the coming years as you upgrade. The TPC-C test is a bargaining tool more than it is a system sizing tool. Hit vendors over the head with it and get the lowest possible prices you can.

By Timothy Prickett Morgan

Second, they provide a performance ceiling. No matter what, you will never get a server to behave as well as the vendor does. If a sales rep says the server they are selling can support 3,000 users running a simple set of financial applications and you know that it can't support more than 250 SAP R/3 Sales and Distribution users or 6,000 TPC-C users, then you also know something doesn't add up. As for the critics who think TPC-C is not suitable for capacity planning, how come IBM uses it to size up the complete range of AS/400e servers instead of its own RAMP-C online benchmark? (The AS/400 Commercial Processing Workload, which IBM uses to show the relative performance of processors and upgrades, is nearly identical to TPC-C.) If it is good enough for the extremely conservative AS/400 division, it is good enough for you.

Third, they allow performance and price/performance comparisons across different server architectures. I don't think any of us wants to go back to the bad old days when benchmarks were not available across different platforms. Yes, the popular benchmarks have limitations, but they allow relative power and price comparisons across platforms and among different workloads. TPC-C has its limitations, but a half dozen processor and system benchmarks, including TPC-C, SAP R/3 SD, SPEC integer and floating point, WebStone and so forth, give a pretty good picture. It is a whole lot easier to make comparisons in 1998 than it was in 1988, and I think people take this for granted.
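To put that third point in concrete terms, here is a back-of-the-envelope sketch of the dollars-per-tpmC arithmetic a buyer can do once comparable results are published across platforms. The vendor names, prices and throughput figures below are entirely made up for illustration; they are not actual published TPC-C results.

```python
# A minimal sketch of cross-platform price/performance comparison:
# divide a published TPC-C system price by its throughput to get
# dollars per tpmC. All figures below are hypothetical placeholders.

published_results = {
    # platform: (total system price in dollars, throughput in tpmC)
    "Vendor A Unix server": (2_600_000, 52_000),
    "Vendor B NT server":   (  750_000, 18_000),
    "Vendor C midrange":    (1_900_000, 30_000),
}

for platform, (price, tpmc) in published_results.items():
    print(f"{platform:24s} {tpmc:>7,d} tpmC at ${price / tpmc:,.0f} per tpmC")
```

Crude as it is, that single dollars-per-tpmC figure is what lets a buyer line up machines from different vendors on the same page before the salesmen start talking about relationships.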

Open systems benchmark efforts

To its credit, IBM has always spearheaded open systems benchmark efforts in the midrange, first with RAMP-C and later with the TPC-C suite. IBM is always trying to backhandedly convince the world that its mainframes are the most scalable servers on the planet, but it never really backs up the claim. Ross Mauri, vice president of hardware development for the S/390 mainframe line in IBM's newly constituted Server Division, says Big Blue has run the TPC-C test on MVS systems, but has never released the results. You know why? We'll tell you.

First, a G4 9672 would probably top out at 30,000 or so TPC-C transactions per minute (TPM); an HP V2250 Unix server with 16 processors can crank through over 52,000 TPC-C TPM. (That's why IBM focuses on the relationship instead of the hardware.) Even a G5 S/390 server, coming in the third quarter at twice the MIPS, will not be able to best the HP V2250 by all that much, and it will cost considerably more. The HP PA-RISC 8500 and Merced servers coming in the next 12 to 18 months will have as much single-engine power as the ECL mainframe engines used in Hitachi's Skyline processors, currently the fastest commercial processors in the world, twice as fast as DEC Alphas and three times as fast as Sun UltraSparc IIs. The S/390 G5 won't come anywhere near this, at least on new workloads like SAP R/3 or PeopleSoft, or TPC-C for that matter.

By not running, and more importantly not winning, the TPC-C tests with mainframes, IBM concedes defeat in the very same new workload markets that it will target with the G5 S/390s. But all is not lost. The mainframe wins hands down on highly tuned, S/390-specific batch jobs, but that's only going to be worth $3bn a year very soon.
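For what it is worth, the rough arithmetic behind that claim, using the round numbers above and the generous simplifying assumption that TPC-C throughput scales linearly with MIPS, looks like this:

```python
# Rough arithmetic behind the paragraph above, using the column's own
# estimates. Assuming TPC-C throughput scales linearly with MIPS is a
# deliberate oversimplification, not a real sizing method.

g4_tpmc_estimate = 30_000    # estimated G4 9672 throughput (from the column)
g5_mips_multiplier = 2.0     # G5 is said to deliver twice the MIPS of the G4
hp_v2250_tpmc = 52_000       # figure cited above for the 16-way HP V2250

g5_tpmc_estimate = g4_tpmc_estimate * g5_mips_multiplier
print(f"Implied G5 estimate: {g5_tpmc_estimate:,.0f} tpmC "
      f"vs HP V2250 at {hp_v2250_tpmc:,d} tpmC")
print(f"Margin: {(g5_tpmc_estimate / hp_v2250_tpmc - 1) * 100:.0f}%")
```

Even on that generous assumption, the implied G5 number beats the V2250 by only about 15 per cent, and that is before anyone mentions what the two machines cost.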

Marketing speak

Finally, I would like to comment on the notion that Merrill Lynch has put forth in its study that companies are more interested in solutions and a relationship with a vendor than they are in picking the most cost-effective platform to support their application workloads. This is exactly the kind of marketing speak I hear constantly from all the computer vendors. Pishtosh! That is not what drives companies to buy servers, and it certainly is not what keeps them coming back for upgrades.

Witness the installed bases of SAP R/3 and PeopleSoft, for instance, which are signing up for NT servers despite the fact that we all know that, technically speaking, Unix servers are far better for the job. Many of the companies using R/3 or PeopleSoft have been locked into expensive and formerly proprietary platforms such as IBM mainframes, and the first chance they get to move off them (in this case, by throwing out legacy apps for third party software and low-cost servers to run it), they do. And they are going for the NT server platforms that cost one-fourth as much as mainframes and half as much as Unix. This year, the NT installed base at Number One application software vendor SAP will surpass Unix and mainframes added together, and about mid-1999 the same thing will happen at Number Two software vendor PeopleSoft. These companies created the client/server application market and they are no happier about NT taking over than the rest of us are; they make more money on Unix and mainframe software. NT rolls on nonetheless, and SAP and PeopleSoft will emphasize it because they have to compete against each other and against Oracle, Baan, JD Edwards and the other half dozen major ERP players.

What companies typically do is buy the least expensive platform possible to support their applications, either because they can't afford more reliable or scalable computers (this is how Unix fought mainframes at first, and it will be how NT ultimately relegates Unix to the very high end of the server market) or because they are duped into not spending more money on better servers. Eventually, they buy bigger and bigger processors once they choose a platform because their batch jobs, not their online transaction processing jobs, require it. People pick systems based on online throughput these days, using TPC-C or R/3 SD as a first-pass estimate, and then use their own workload benchmarks to size things up. But what no one talks about is that this is not the main determinant of processor size or capacity at any company. Batch jobs, more than anything else, determine how much more iron customers have to buy, mainly because these workloads eat far more resources than online jobs and typically require faster processors to meet the ever-shrinking batch window each night, week or month. Almost every company that moves to a new platform ends up buying two or three times as much iron as it thought it would have to. No one is saying that this is right or good, but this is what happens, and no one, not even Microsoft, is talking about it. And to say that this does not happen with the IBM mainframe base does not mean that mainframes have some sort of magical powers; it just means no one is porting new and relatively unknown workloads to those machines.
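To illustrate the batch-window point, here is a minimal sketch with entirely hypothetical figures; no real benchmark or sizing guide is behind these numbers, they simply show how a shrinking overnight window, not the online peak, can end up dictating the size of the box.

```python
# A hypothetical sketch of why the batch window, rather than online
# throughput, often sets the size of the machine. Every figure below
# is made up for illustration.

online_peak_tps = 400            # peak online transactions per second the site needs
online_tps_per_cpu = 50          # assumed online throughput of one processor

batch_records = 200_000_000      # records the nightly batch run must process
batch_window_hours = 4           # hours available before online work resumes
records_per_cpu_second = 600     # assumed batch throughput of one processor

# Processors needed to finish the batch run inside the window.
batch_cpu_seconds = batch_records / records_per_cpu_second
cpus_for_batch = batch_cpu_seconds / (batch_window_hours * 3600)

# Processors needed to carry the online peak.
cpus_for_online = online_peak_tps / online_tps_per_cpu

print(f"CPUs to meet the online peak:  {cpus_for_online:.1f}")
print(f"CPUs to meet the batch window: {cpus_for_batch:.1f}")
```

With these made-up numbers, the batch window demands roughly three times the processors the online peak does, which is about the ratio of extra iron described above.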

Inertia

Only a small minority of companies jump platforms for their core applications in the course of a decade. Anyone who has been in the computer business for more than a few years knows full well that inertia, more than any other factor, determines the servers that customers keep using. Why does IBM still have a mainframe base? Because that is what three-quarters of the top 4,000 companies in the world have always used to support big data processing jobs. They know NT is cheaper, but that's about all they know about NT, other than the fact that NT isn't nearly as reliable and its servers are not as scalable as IBM's S/390 mainframes. So what? An AS/400 Apache or RS/6000 Raven server will kick the living tar out of a G4 S/390 server, and both the AS/400 and the RS/6000 are, technically speaking, much better computers. They have better and cheaper technology and, more importantly, better software tools. A Northstar AS/400 will be able to best the G5 S/390 mainframes that IBM will announce on May 5, but these companies don't care because their whole world is tied up in MVS.

On this, Merrill and I would agree. But I say that it is not a relationship with IBM that the mainframe base holds dear. These companies would drop their mainframes in a heartbeat if they could get the resources and the courage together to try. They'd have to stake their jobs on it, though, and there just isn't that much to be gained from it unless the mainframe applications are so antiquated that it makes sense to replace them with third party software. They don't stay with their mainframes because they love them; they do so because they hate their alternatives. But they boxed themselves into this position by writing their applications close to the iron on a system that, unlike the AS/400, allows them to. Whatever money mainframe customers saved by fine-tuning their applications for the S/390 architecture over the past 20 years, they are paying back with compounded interest today.

And I think if most of the MIS managers at IBM mainframe sites were not retiring in a few years, more of them would be jumping to new platforms and installing applications that use either DB2 or Oracle on an open systems platform, whether that is OS/390 on an S/390, Unix on an IBM or HP server, or NT on any number of name-brand PC servers. Most mainframe customers only have a few hundred end users, after all, not tens of thousands. They may yet move, too: there's probably a younger Unix or NT systems manager below the MIS manager, gunning for a promotion and a project to show off with. Relationship, indeed. Solutions, indeed. But Merrill Lynch, don't forget to remind your clients that MIS managers and their executives are also guessing, based on their own past experience, on what they think the future will be, and on what benchmarks, which give them a rule by which to measure all those disparate servers, tell them about various platforms.