While clusters based on Unix, Windows, or Linux have been able to run the TPC-C test using parallelized databases, the test that Oracle and HP ran in conjunction with commercial Linux distributor Red Hat marks the first time that a clustered system and an SMP system using the same database and the same processors could demonstrate nearly the same performance.

Here’s the other reason why it is important: the cluster with a modest discount is a lot less expensive than a big SMP box with a very big discount. At list price, the SMP box HP tested costs 2.3 times as much as the cluster HP tested, and it delivers a little less performance. And most companies pay something much closer to list price than the steep discounts HP and IBM show on their big, bad boxes.

Here’s how the three vendors set up the cluster they tested, which would have shown similar performance and price/performance if it had been running Windows. (Don’t get hung up on the Linux aspect of this benchmark result. Oracle 10g, the grid-enabled version of the Oracle database that is due this spring, is the real factor in getting clusters on par with SMPs.) Each node in the cluster was a four-way rx5670 Integrity server with 48GB of main memory and a 36GB SCSI disk drive. The 16 servers had a total of 768GB of main memory, and they attached to 64 of HP’s Modular SAN Array 1000 disk arrays, which held a total of 93.2TB of disk storage. At list price, the cluster’s hardware cost $4.7m, and maintenance cost $430,876 for three years.

All of the nodes in the cluster ran Red Hat’s Enterprise Linux AS 3.0 operating system, which was announced just a few weeks ago, plus Oracle 10g and its Real Application Clusters and partitioning extensions. That software cost just under $2.4m, including three years of support, or just under $38,000 per processor. This is actually less expensive than Oracle 9i Enterprise Edition for SMP servers, which costs $40,000 per processor, plus $8,800 per processor per year for maintenance and support after the first year of free support expires. Even after hefty 20% discounts, Oracle 9i Enterprise Edition costs $2.3m on an SMP box, and operating system costs go on top of that.
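
As a rough check on that per-processor figure, here is a back-of-the-envelope sketch in Python. The 16-node count is inferred from the 768GB total at 48GB per node, and the $2.4m software figure is an approximation of the number quoted above:

    # Per-processor cost of the cluster's database software stack
    # (figures approximated from the article above)
    nodes = 16                          # inferred: 768GB total / 48GB per node
    processors = nodes * 4              # four-way rx5670 servers
    software_total = 2_400_000          # Oracle 10g, RAC, partitioning, incl. 3 years of support
    print(software_total / processors)  # 37500.0, or "just under $38,000" per processor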

On the cluster, Red Hat’s piece of the action came to a mere $95,616 (with three years of support), so you can see how much of a differentiator the operating system is in the total cost of this cluster. Adding in client hardware and connectivity devices boosted the cost of the cluster to $7.8m, to which HP applied a 16% large systems discount. This cluster could crank through 1.18 million transactions per minute, yielding a cost of $5.52 per TPM.
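
The $5.52-per-TPM figure falls out of the numbers above. Here is a minimal sketch of that arithmetic; because the $7.8m and 1.18 million TPM figures are rounded, the result lands a few cents above the official number:

    # Cluster price/performance: discounted total cost divided by throughput
    list_cost = 7_800_000   # cluster, clients, and connectivity at list (approx.)
    discount = 0.16         # HP's large systems discount
    tpm = 1_180_000         # transactions per minute (rounded)
    print(round(list_cost * (1 - discount) / tpm, 2))  # ~5.55; the official figure is $5.52 per TPM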

Compare this to the HP Superdome that was tested running HP-UX and Oracle 10g only a few weeks ago. That Superdome had 64 1.5GHz Madisons, just like the cluster, and even had 25% more main memory, but it actually did less work at 1,008,145 TPM. That server ran HP-UX 11i and Oracle 10g Enterprise Edition, and it cost $8.33 per TPM after a staggering 48% discount on hardware, software, and maintenance. The Superdome server itself cost $7.1m, with the 1TB of main memory accounting for $5m of that. HP-UX and Oracle 10g cost $1.4m, and 38.3TB of disk storage cost $5m. Application server hardware and software made up the remainder of the $17.9m list price of the Superdome setup tested.
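
Put side by side, the two results bear out the list-price argument made earlier. A quick comparison, using only the figures quoted in this article and taking $17.9m as the list price of the full Superdome configuration:

    # Comparing the cluster and the Superdome on list price and price/performance
    cluster_list, superdome_list = 7_800_000, 17_900_000
    cluster_per_tpm, superdome_per_tpm = 5.52, 8.33
    print(round(superdome_list / cluster_list, 1))        # ~2.3, i.e. 2.3 times the list price
    print(round(superdome_per_tpm / cluster_per_tpm, 2))  # ~1.51, i.e. about 51% more per transaction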

Clusters are complex and expensive to administer, but if Oracle 10g masks some of those administrative headaches, companies that might otherwise have bought big SMP boxes to run their databases could start thinking about clusters instead.

This article is based on material originally produced by ComputerWire.