February 5, 1989


By CBR Staff Writer

When it comes to benchmarking, IBM wants it both ways. Although happy to spend time singing the praises of its proprietary RAMP-C benchmark, it refuses to publish detailed testing specifications, automatically invalidating any competitors' attempts to use it. The impasse was amply illustrated during a recent South Bank seminar given by John Laskowski, manager of IBM's Performance Evaluation Centre in Dallas. Revealing just enough to suggest the superiority of RAMP-C – Registered Approach for Measurement Performance – Cobol – he promptly slammed Unisys Corp for offering RAMP-C performance figures at the launch of its new Micro A. Having scrabbled through the manuals, the best that manufacturers can currently hope for is grudging IBM recognition of some RAMP-C-alike testing procedures.

According to Laskowski, the merits of RAMP-C lie in its ability to test system performance in a commercial processing environment. Commercial workloads, he argued, are typified by few floating point calculations, moderate quantities of application and supervisor execution time, but comparatively large amounts of input-output activity. RAMP-C Cobol programs, he continued, are designed to measure these conditions in two ways: the representation of interactive applications through simulated users, and the testing of a full range of system resources, comprising disk, terminal, main storage and processor.

Very simple

Further credit is claimed for dividing RAMP-C transactions into multiple levels of complexity, and for using operator think time, simulated keying rates and input character count parameters to define a range of functions and complexities. With terminals run in multiples of five, a simple transaction constitutes a key-plus-think rate of 7.5 seconds and uses 20% – one terminal in five – of available terminal power. This changes to 10.0 seconds and 20% for an average transaction, 18.2 seconds and 40% for complex transactions, and 26.0 seconds and 20% within the very complex category.
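The key-plus-think figures above imply a theoretical per-terminal transaction rate for each complexity class: one transaction per cycle, or 3,600 divided by the cycle time in transactions per hour. A minimal sketch of that arithmetic (the cycle times and the 20/20/40/20 split are the figures quoted at the seminar; treating the split as the share of terminals per class in each group of five is an assumption, as are all names in the code):

```python
# Theoretical RAMP-C terminal throughput from the quoted key-plus-think rates.
# A simulated user completes one transaction per key-plus-think cycle, so a
# terminal's rate is 3600 / cycle_seconds transactions per hour.

CLASSES = {
    # class: (key-plus-think seconds, quoted share of terminal power)
    "simple":       (7.5,  0.20),
    "average":      (10.0, 0.20),
    "complex":      (18.2, 0.40),
    "very complex": (26.0, 0.20),
}

def per_terminal_tph(cycle_seconds: float) -> float:
    """Transactions per hour for one simulated user with the given cycle."""
    return 3600.0 / cycle_seconds

def group_tph(terminals: int = 5) -> float:
    """Hourly throughput of one terminal group, assuming the quoted shares
    describe how the group's terminals are divided among the classes."""
    return sum(terminals * share * per_terminal_tph(cycle)
               for cycle, share in CLASSES.values())

print(round(per_terminal_tph(7.5)))   # a simple-class terminal: 480 tx/hour
print(round(group_tph(5), 1))         # blended rate for one group of five
```

On this reading, a group of five terminals would generate roughly 1,374 transactions per hour, dominated by the two complex-class terminals' slower 18.2-second cycles.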
When compared with Debit/Credit, Laskowski claims a number of advantages. Chief of these is that Debit/Credit is a banking-based benchmark, gathering results via a single, simple, file update-intensive application. Second is the number of different implementations that abound: the ET1 implementation, for example, usually implies an external driver, while TP1 implies an internal driver and relational database. This, in turn, leads to muddled definitions of performance, particularly in relation to response time. Laskowski also claims that the stipulated scaling rules – 100,000 account records to 100 teller records to 10 branch records – are expensive to implement and consequently may not be fully adhered to.

Finally, Laskowski challenged the use of response time as an accurate method of determining system throughput, arguing that results can vary greatly depending upon system configuration. By applying memory, disk and liberal amounts of fine-tuning, sub-second internal response times can be gained without any real reflection of what the system is capable of, he said. By contrast, the conservative RAMP-C ensures that any bottleneck is the processor. Initially, the processor is run until it is saturated with transactions; system capacity, measured in transactions per hour, is then established as 70% of the rate attained at saturation point. The trade-off, said Laskowski, is minimal: users gain an accurate impression of system capability, and an average response time that tends to fall between 1.0 and 2.5 seconds.

A further chance to discredit the Debit/Credit benchmark came with an exhaustive account of IBM's responses to DEC's current bout of benchmarking propaganda. At the launch of its DECintact transaction processing monitor (CI No 975), DEC entered the benchmarking fray by claiming transaction per second rates ranging from 6tps on the MicroVAX 3600 through to 53tps on two combined VAX 8810s.
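Laskowski's capacity rule reduces to a single derating step: drive the processor to saturation, then rate the system at 70% of the throughput achieved there. A minimal sketch, assuming a measured saturation figure (the function name and sample number are illustrative, not from IBM's procedure documents):

```python
def rated_capacity(saturation_tph: float, derate: float = 0.70) -> float:
    """RAMP-C style system capacity: a fixed fraction of the transactions-
    per-hour rate at processor saturation, leaving headroom so the
    processor remains the only bottleneck."""
    return derate * saturation_tph

# e.g. a machine that saturates at 10,000 transactions per hour
print(rated_capacity(10_000))  # 70% of saturation, in transactions/hour
```

The derating is what keeps the quoted average response time inside the 1.0 to 2.5 second band: the published figure is deliberately below the peak the hardware can sustain.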
The company took equal pleasure in flaunting an externally audited set of tests, showing CICS/VSE-VSAM software running on an IBM 9377-90 and a 4381-22 notching up just 5.0tps and 7.3tps respectively. IBM, said Laskowski, was not very comfortable with these results, and subsequently appointed its own consultant to re-run the tests. The results attained placed the 9377-90 at 17.1tps, and the 4381-22 at 22.1tps. Confronted by this unwelcome set of new figures, DEC turned for salvation to simulation methodology, arguing that IBM's results had been achieved by using 10 terminals for each tps measurement, whereas it had used 100.

IBM then re-ran the tests to DEC's methodology specifications, and found that 17.1tps dropped only to 15.8tps, and 22.1tps fell back only to 20.8tps. The differing results, argued Laskowski, are a reflection of the inherent weaknesses of Debit/Credit benchmarking procedures.

DEC response

The current DEC response, for those still interested, is that it has yet to see full details of how IBM achieved its final results. In addition, DEC's UK marketing manager Mike Hudgell believes that although not perfect, the Debit/Credit benchmark tests many important system functions, including screen handling, database, TP monitor and processor. For DEC customers, who can use VAXclusters and SMP multiprocessing to grow system capacity easily, response time is more of an issue, he added. IBM, meanwhile, is looking forward to reading DEC's account of how it arrived at its initial measurements – a report promised last year, but yet to be published.

For those who see the benchmarking cold war developing into a bloody, protracted and increasingly childish campaign, a glimmer of hope has emerged on the horizon. This takes the form of the Transaction Processing Performance Council, set up last August (CI No 992) and administered by Los Altos-based consultant Omri Serlin. Counting both IBM and DEC among its 27 members, the council is currently investigating ways of standardising the Debit/Credit benchmark, in order to rule out exactly the kind of inconsistencies that provide both sides with continuing rounds of ammunition. Best news of all, however, is that IBM plans to offer its RAMP-C benchmark to the council as a standard, enabling its undoubted performance measurement capabilities to be adopted as a practical, working standard by all players.
