VMware’s decision to adopt per-socket pricing on dual-core systems means that companies who want to virtualize their server platforms will not pay a licensing penalty when they move from single-core machines, which dominate the market today, to dual-core machines, which will dominate the market by this time next year.
In effect, VMware has decided to make it up in volume, as the old saying goes, and this straightforward approach, combined with the rapid adoption of virtualization technologies on servers (and probably soon on desktops and workstations), should make it a lot easier for VMware to sell its wares.
So far, IBM has been treating a core as a processor in its zSeries mainframe line as well as in its Power-based iSeries and pSeries server lines, and given that IBM charges for operating systems and middleware on these three platforms based on processor count, you can see why it would decide to do that. But on its x86-based xSeries servers, IBM is sometimes pricing based on sockets, not cores. If IBM had set the example from the beginning and priced by the socket rather than by the core, this might never have become an issue at all.
But, alas, IBM didn’t do that back in 2001. And when Sun tried to call the dual-core UltraSPARC IV a single processor back in September 2004, no one listened, and some people laughed. However, Sun decided at that point to price its software – at least the software that is priced on a per-CPU basis rather than on employee count, as its middleware is – on the number of CPU sockets in the box, not the core count. That was a similarly generous move, and the laughing stopped.
AMD and Intel followed suit, and then Microsoft did, too. And now, VMware. Oracle has created a pricing scheme for its 10g database that counts the cores in the box, multiplies that count by 0.75, and then rounds up to arrive at the number of per-processor licenses a customer must buy. On a dual-core, two-socket box, that works out to three licenses instead of four – a 25% discount against straight per-core pricing, where true socket-based pricing would amount to a 50% discount. This is not even close to saying that Oracle is core neutral, except on machines where there is only one core to count in the first place.
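To make the arithmetic concrete, here is a rough sketch – the function names and machine configurations are made up for illustration, and nothing here is an official calculator from any of these vendors – of how the three approaches count licenses on a few dual-core boxes:

```python
import math

def per_core_licenses(sockets, cores_per_socket):
    # Straight per-core pricing: one license for every core in the box.
    return sockets * cores_per_socket

def per_socket_licenses(sockets, cores_per_socket):
    # Core-neutral pricing (the VMware, Sun, and Microsoft approach):
    # one license per socket, no matter how many cores sit in it.
    return sockets

def oracle_10g_licenses(sockets, cores_per_socket):
    # Oracle's multicore scheme: total cores times 0.75, rounded up.
    return math.ceil(sockets * cores_per_socket * 0.75)

for sockets, cores in [(1, 2), (2, 2), (4, 2)]:
    print(f"{sockets} socket(s) x {cores} cores/socket: "
          f"per-core={per_core_licenses(sockets, cores)}, "
          f"per-socket={per_socket_licenses(sockets, cores)}, "
          f"Oracle={oracle_10g_licenses(sockets, cores)}")
```

On a two-socket, dual-core box the three schemes yield four, two, and three licenses, respectively – the 25% versus 50% gap described above.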
While VMware, like the other software vendors that have gone core neutral, should be commended for this action, the one thing none of these vendors has said is that it will remain core neutral from here on out. And Computer Business Review argues that vendors will not be able to do so, no matter how good their intentions are.
If you look at the chip roadmaps, you know that Sun has an eight-core Niagara chip coming, followed by an even more heavily multicored Rock processor. Intel and AMD are working on four-core variants of their processors, and IBM’s Power6 might have four cores as well. Chips with even larger numbers of cores are coming down the pike from these vendors, too – maybe 8 or 16 cores on a single chip, and someday even more.
Rather than using Moore’s Law to shrink the chip and thus crank up the clock speed to boost performance, we are adding processor cores and praying like hell that we can get all of our workloads to be multithreaded. It is a fair bet, considering the alternative: servers so hot that they would be, for all intents and purposes, a step backward into the old days of water-cooled mainframes.
But the effect of this is to put software pricing on the same Moore’s Law curve that hardware pricing has been riding for four decades. And there is no way that vendors want to do that with their software pricing – of this, you can be sure. It is hard to believe that VMware will be happy to sell a license to GSX Server or ESX Server for what is effectively 1/4th, 1/8th, and then 1/16th of today’s price per unit of processing power as chips with 4, 8, and 16 cores roll out into the market – unless the market for its software grows a lot faster than it is growing right now. What holds true for VMware holds equally true for any other software vendor and any other kind of software.
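To see how steep that curve is, consider a back-of-the-envelope calculation; the per-socket price below is purely hypothetical and is used only to show the shape of the decline:

```python
# If a license stays priced per socket while core counts per socket climb,
# the effective price per core falls on a Moore's Law-like curve.
PRICE_PER_SOCKET = 3750.0  # assumed figure for illustration, not a real price

for cores_per_socket in (1, 2, 4, 8, 16):
    per_core = PRICE_PER_SOCKET / cores_per_socket
    print(f"{cores_per_socket:2d} cores per socket -> "
          f"${per_core:,.2f} per core "
          f"({1 / cores_per_socket:.4f} of the single-core price)")
```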
This brings us back to the same point made when Oracle announced its multicore pricing scheme a month ago. Core-neutral pricing, while generous, does not in any way clear up the issues surrounding virtual machine or logical partitioning on servers and desktops. With partitioning, a fraction of a processor can be dedicated to running a stack of software. If you can prove that a job is isolated in a static partition, vendors will charge you accordingly, counting cores or sockets.
But the whole point of virtualization is to be dynamic as well, to change processor resources over time. So decimal math is a necessity (since a fraction of a processor can be allocated to a workload), and so is some factor that takes into account how partitions change over time. An equitable software pricing scheme has to account for usage over time, not just a static snapshot or a theoretical maximum within a partition. Software pricing should not negate the flexibility gained from partitions, but rather work with it.
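What might metering usage over time look like in practice? Here is a minimal sketch, assuming a hypothetical log of allocation changes for a single partition over one day; none of the numbers or names correspond to any vendor’s actual metering scheme:

```python
from datetime import datetime

# Hypothetical allocation log for one partition: (timestamp, fraction of a
# core allocated from that moment onward).
allocation_log = [
    (datetime(2005, 10, 1, 0, 0), 0.5),    # half a core overnight
    (datetime(2005, 10, 1, 8, 0), 2.0),    # two cores during the business day
    (datetime(2005, 10, 1, 18, 0), 0.25),  # a quarter core in the evening
]
end_of_period = datetime(2005, 10, 2, 0, 0)

def core_hours(log, end):
    """Integrate fractional core allocations over time."""
    total = 0.0
    for (start, cores), (next_start, _) in zip(log, log[1:] + [(end, None)]):
        total += cores * (next_start - start).total_seconds() / 3600.0
    return total

used = core_hours(allocation_log, end_of_period)
peak = max(cores for _, cores in allocation_log)
hours = (end_of_period - allocation_log[0][0]).total_seconds() / 3600.0

print(f"metered usage:          {used:.1f} core-hours")         # 25.5
print(f"static-maximum charge:  {peak * hours:.1f} core-hours")  # 48.0
```

Charging for the metered 25.5 core-hours rather than the 48 core-hour theoretical maximum is the difference between pricing that works with dynamic partitions and pricing that negates them.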
And thus, any sane person would conclude that what the software industry needs is to define a quantum of computing power that spans server architectures and operating systems. Then, the industry needs to create and adhere to a unified software metering system that can be installed on any machine and be used to meter the usage of software. We need to define the information equivalent of what a watt is to electricity, what a gallon is to water, what a meter is to length, what a minute is to a phone call.
If we can’t do that – and there is good reason to believe that the many parties who would have to agree to such a thing will never do so – then software will have to be priced on some metric that is external to the systems that run it, such as employee count or company revenue, as Sun has done with its middleware stack.