Big Blue has long been enthusiastic about the SPC benchmarks, and has already published a string of results from tests of its mid-range and high-end disk arrays. It owned the previous record for the SPC-1 test. SPC-1 was designed to simulate random OLTP-style workloads, as distinct from the sequential workloads of SPC-2.
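For readers unfamiliar with the distinction, the short Python sketch below loosely illustrates the two access patterns. It is not the SPC's actual workload generator, and the volume and request sizes are assumptions chosen purely for illustration.

import random

# A loose illustration - not the SPC workload generators - of the difference
# between the two benchmarks: SPC-1-style random small-block requests versus
# SPC-2-style sequential large-block streaming.
CAPACITY = 100 * 2**30   # assume a 100 GiB volume for the sketch
SMALL_IO = 4 * 2**10     # 4 KiB, typical of OLTP-style requests
LARGE_IO = 1 * 2**20     # 1 MiB, typical of streaming requests

def random_oltp_offsets(n):
    """SPC-1-style: each request lands at an unrelated, randomly chosen block."""
    return [random.randrange(0, CAPACITY, SMALL_IO) for _ in range(n)]

def sequential_stream_offsets(n, start=0):
    """SPC-2-style: each request follows directly on from the previous one."""
    return [start + i * LARGE_IO for i in range(n)]

print(random_oltp_offsets(3))
print(sequential_stream_offsets(3))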

SPC administrator Walter Baker said that the test involved by far the most complex set-up yet tested by the SPC. The largest 8-node version of the SVC was used to virtualize the capacity of 16 DS4700 arrays, with 80 expansion enclosures, totaling over 1,500 disk drives.

The price for the entire set-up – list price less what the SPC says is a realistic field discount – was $3.3m. With that forest of drives at hand, some might suspect that IBM tested a short-stroked racing machine rather than something representative of a real-world system. Not so, according to Baker.

For the SPC-2 test, IBM came close to 97% disk capacity utilization, and for the SPC-1 test, it reached around 50% disk utilization including mirrored data, Baker said.

So they short-stroked on SPC-1 a little, but it didn’t make a huge difference, and it saved a huge amount of reconfiguration, he said. According to Baker, what IBM tested counted as a real-world system because it is not uncommon for vendors to recommend 50% utilization to their customers precisely to maintain performance.
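As a rough sketch of what short-stroking means in practice – confining I/O to only part of each drive so the heads travel shorter distances – the snippet below uses the 50% figure Baker cites; the per-drive capacity is an assumption, not the tested hardware's specification.

import random

# Sketch of short-stroking: requests are confined to a fraction of each drive,
# which shortens head movement on every seek.
DRIVE_CAPACITY = 146 * 10**9   # assumed per-drive capacity in bytes
UTILIZATION = 0.50             # only half of each drive holds data

def short_stroked_offset():
    """Offset drawn only from the used portion of the drive."""
    return random.randrange(0, int(DRIVE_CAPACITY * UTILIZATION))

def full_stroke_offset():
    """Offset drawn from anywhere on the drive."""
    return random.randrange(0, DRIVE_CAPACITY)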

The SVC is an in-band virtualization system, and IBM has in the past claimed that its cache helps deliver faster performance than unvirtualized disk arrays. Is that cache the reason for its record SPC performance? Baker again defended IBM, arguing that the SVC cache is smaller than that seen in some high-end disk arrays.

He added: We had 100TB of [simulated workload] data, 90% of it active. How much of that can you cache? And anyway the benchmark’s designed to move the hot-spot around.
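Baker's arithmetic is easy to check. Assuming, purely for illustration, a cluster cache in the tens of gigabytes – Baker says only that it is smaller than a high-end array's – the active data dwarfs it:

# Back-of-the-envelope check of Baker's point. The 64GB cache figure is an
# assumption for illustration, not IBM's published specification.
DATA_TB = 100
ACTIVE_FRACTION = 0.90
CACHE_GB = 64                                    # assumed total cluster cache

active_gb = DATA_TB * 1000 * ACTIVE_FRACTION     # roughly 90,000 GB active
coverage = CACHE_GB / active_gb                  # fraction that fits in cache
print(f"Cache covers roughly {coverage:.2%} of the active data")
# prints roughly 0.07% - far too little for caching alone to explain the result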

The only major storage hardware supplier not to have published any SPC test results is EMC. In the past EMC has said that it is not scared of the measuring stick, but has argued that benchmarks are no indication of real-world performance. Yesterday the company went a little further and accused vendors of tweaking test systems on the sly.

To maximize performance, vendors can, for example, turn off features and functions that would never be turned off in a real-world application. When quoting SPC results, vendors do not have to indicate what they disabled. Vendors can turn off all RAID functions, turn off all HA functions, turn off all data integrity and other functions in order to look good, EMC said in an email to Computer Business Review.

Baker’s keen reply was: All the tests so far have used RAID 5 or striped mirroring. In a couple of cases where write-caching was disabled, it was called out. Even if data protection were disabled, vendors would have to say so. That’s why we call our reports full disclosure reports.