IBM gets rivals roiled as it puts its Internal Use benchmarks into the public domain

A very misleading document. So say several industry sources on Processor Performance – IBM and Representative Competitors, ZZ19-8333-4, a reference card comparing IBM processors with those of the plug-compatible mainframe manufacturers. It is marked for Internal Use Only, but apart from the card having been passed to the press, effectively putting it into the public domain, there are reports that the IBM Corp salesforce has been advised to give the card, and its successor ZZ19-8333-5, to customers during performance discussions. The advice was given to 200 IBMers attending an IBM seminar held in Dublin at the end of May, at which the Large Systems Performance Reference is said to have been a key topic. One of the most interesting presentations was 1991 Value for Money Performance Measurement and Partitioning Options. Drawing on the same data and extrapolations as the reference card, sources say, it included essential information that customers ought to have in order to put the reference card in perspective. The data appears to be based not on the Large Systems Performance Reference database itself, but on the output of a personal computer-based planning tool, LSPR/PC 3.1, and IBM's claims are founded on a combination of measurements, extrapolations and vendor claims. But the card doesn't say which processors have actually been measured and which have been extrapolated. Where measurements were made, they were done using IBM's standard ITR (Internal Throughput Rate) methodology, under conditions that are not described in the card. And although the techniques are described in a number of manuals, those are all for IBM Internal Use Only. Further, there is no scientific workload data, and the card fails to indicate that the batch workloads are exclusively commercial batch. Another criticism is that many of the measurements that have been made are not current, and there is no information on when the benchmarks took place. There is an interesting item on performance variation, with the variation given relative to a 3090-180J, the base engine used in the 3090-600J. The 3090-600J is said to have a 13% variation, the AS/EX 100 a 35% variation, and the 5990-1400 is rated at 38%. These figures have raised a few eyebrows, with several commentators saying that it is theoretically impossible to build a mainframe with a lower variation than that claimed for the 3090-600J, and that the only way to build one with the same variation would be to reverse-engineer the box.
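The card does not say how those variation figures are calculated, but under the usual Large Systems Performance Reference approach a plausible reading is that each processor is measured against several workloads, its ITR ratio against the base engine is worked out per workload, and the variation is the spread of those ratios. The following Python sketch illustrates that arithmetic only; the workload names and figures are invented, not taken from the card:

    # Illustrative only: the reference card does not publish its inputs,
    # so all workload names and figures below are invented.
    # ITR (Internal Throughput Rate) = external throughput / CPU utilisation,
    # i.e. transactions completed per CPU-busy second.

    # Per-workload ITRs for the base engine (a 3090-180J in the card)
    # and for a hypothetical candidate processor.
    base_itr      = {"online": 100.0, "batch": 95.0, "cms": 105.0}
    candidate_itr = {"online": 570.0, "batch": 610.0, "cms": 540.0}

    # ITR ratio (ITRR) of candidate to base, per workload.
    itrr = {w: candidate_itr[w] / base_itr[w] for w in base_itr}

    # One plausible definition of "variation": the spread of the
    # per-workload ratios, expressed as a percentage of the lowest.
    spread = (max(itrr.values()) - min(itrr.values())) / min(itrr.values())
    print(f"per-workload ITRRs: {itrr}")
    print(f"variation: {spread * 100:.0f}%")   # about 25% on these numbers

On such a definition, the 13% claimed for the 3090-600J would mean its relative performance is remarkably consistent across workloads, which is the nub of the commentators' scepticism.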

Dublin meet suggests only IBM has the answer to users’ performance concerns

The Dublin presentation set out to summarise the issues surrounding Value for Money and, quite rightly, examined the various sources of performance information: the press, leasing companies, consultancies such as Gartner Group, customers, and vendors. The main message, however, was that only IBM does a credible job, since the press tends to be the mouthpiece of the last vendor it spoke to; leasing companies have a vested interest; and consultants appear to rely on vendor claims. None of the above does any benchmarking whatsoever, which leaves only customers and vendors. The speaker went on to outline the criteria that must be met if an organisation is to be credible in the performance arena: the quality provider of performance data must have a benchmarking centre for customers to use and visit; use representative workloads that customers can relate to; maintain constant workloads for use over many years and processor ranges; have a sound, repeatable benchmarking methodology with no tricks; use sound analysis techniques that are understandable and acceptable; and publish benchmark details. IBM says that it meets all these criteria, but what of competitive vendors like Amdahl Corp, Hitachi Data Systems Inc and Comparex Informationssysteme GmbH? The first does have a benchmarking centre, AmPEC, and does use representative workloads, but IBM says that the company fails to meet the last four criteria. Hitachi Data claims to have a benchmark centre at Santa Clara, but IBM does not believe that it is staffed by a solid team, unlike IBM's own benchmarking centres. And Comparex? Apparently no benchmarking facilities whatsoever. Other industry sources say that both Hitachi Data and Comparex do have benchmarking facilities; in fact the latter has two, a dedicated facility north of Mannheim, plus access to its parent company BASF AG's data centre in Ludwigshafen.

How IBM arrives at its competitive benchmark data

The next topic was IBM's performance methodology, and why it is that only IBM can provide reliable performance data. The benchmark centres have the latest processors and experienced teams, and Hitachi Ltd and Fujitsu Ltd boxes are occasionally measured in Tokyo. The workloads are representative of customer environments, using MVS Online, Batch and VM/CMS, and the method has been developed over 15 years with strict validation rules. The workloads are kept stable so that processors can be compared over several years, and ITR, unconstrained processors and equal CPU utilisations are held to be essential to prevent distortions of comparative performance. The results are published in the Large Systems Performance Reference, the books that describe IBM's benchmarking process and results; although these are not standard manuals, they are available through account teams. Finally, IBM measures several competitive processors, the measurements said to be made in exactly the same way and with the same workloads as for the IBM processors, and the benchmarking results are summarised in the reference cards.
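The insistence on ITR, unconstrained processors and equal CPU utilisations is easy to illustrate: raw external throughput penalises a machine whose CPU sat partly idle behind an I/O constraint during the run, whereas ITR normalises by CPU-busy time. A minimal Python sketch with invented figures; the manuals describing the real procedure are Internal Use Only:

    # Why ITR rather than raw throughput: invented figures for illustration.
    # ETR (External Throughput Rate) = transactions per elapsed second.
    # ITR (Internal Throughput Rate) = ETR / CPU utilisation.

    etr_a, util_a = 480.0, 0.90   # machine A: CPU 90% busy during the run
    etr_b, util_b = 400.0, 0.60   # machine B: CPU only 60% busy, I/O-constrained

    # On raw ETR, A looks 20% faster than B...
    print(f"ETR ratio A/B: {etr_a / etr_b:.2f}")    # 1.20

    # ...but per CPU-busy second, B's processor is the faster engine,
    # which is exactly the distortion equal utilisations are meant to avoid.
    itr_a, itr_b = etr_a / util_a, etr_b / util_b   # 533.3 vs 666.7
    print(f"ITR ratio A/B: {itr_a / itr_b:.2f}")    # 0.80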