From our sister publication Computer Finance.
Database software has been central to the development of client-server computing. Indeed, the first manifestation of client-server came from the separation of front-end and back-end processes with the growth of the market for relational databases in the early 1980s. So what has happened to pricing since then?

Although the relational database has brought undoubted technological benefits, it has complicated the costing structures of database software. Like the technology itself, costs have become ‘distributed’ and more difficult to itemize and control. In place of the monolithic costing structures associated with earlier database technologies, client-server relational database systems are often based on concurrent user pricing at the server level and additional – often hidden – costs of connectivity and client software. Put simply, in the pre-relational era you bought a database management system (DBMS), paid a license fee and that was it; in the client-server relational era, licensing the server is only the beginning of what can be an expensive ongoing commitment.

Meta Group notes, for example, that the full five-year cost of ownership of an enterprise DBMS can turn out to be much higher than expected once support and maintenance costs are added in. It cites the example of Oracle, where costs can rise from $1,200–$1,400 a seat to as much as $2,000. Proposed new technological developments – such as object-oriented computing, middleware, data warehousing, Internet connectivity and so on – complicate the costing structures of databases even further.
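By way of illustration, the short Python sketch below shows how a five-year per-seat cost builds up once an annual support and maintenance charge is layered on top of the initial licence fee. The 10% support rate is an assumption chosen purely for illustration, not Meta Group's own model; with it, a $1,300 seat licence comes out at roughly $1,950 over five years, in the region of the figure cited above.

```python
# A back-of-the-envelope sketch of how a five-year per-seat DBMS cost builds
# up once annual support and maintenance are added to the initial licence.
# The 10% support rate is an illustrative assumption, not Meta Group's model.

def five_year_cost_per_seat(licence: float, support_rate: float = 0.10,
                            years: int = 5) -> float:
    """Initial licence plus a yearly support/maintenance charge levied as a
    percentage of the licence price."""
    return licence + licence * support_rate * years

if __name__ == "__main__":
    for licence in (1200, 1300, 1400):
        total = five_year_cost_per_seat(licence)
        print(f"${licence:,} seat licence -> about ${total:,.0f} over five years")
```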
The changing face of DBMS
The infrastructure of the DBMS market is changing – with a shift away from traditional revenue streams. In the past two years, for example, DBMS vendors have seen revenues from server sales decline and revenues from other sources such as middleware and application development tools increase. This change is the direct result of server technology becoming increasingly ‘commoditized’. Users now have many different routes to DBMS:
* Traditional mainframe DBMS geared to transaction processing.
* Traditional client-server relational DBMS geared to data retrieval.
* Workgroup computing products such as Lotus Notes and Oracle InterOffice.
* Proprietary middleware options such as Peer Logic’s Pipes or TechGnosis’s SequeLink.
* Internet or Intranet based communications products.
The range of options presents both a challenge and an opportunity to enterprise users. The challenge comes from unraveling the bewildering array of choices and then uncovering the costing structures associated with them. The opportunity comes from the inevitable competition in a broader supplier base and from a market that gives users a stronger negotiating position. Vendors are tearing up their price lists in pursuit of market share, and well-informed users with aggressive purchasing strategies are the beneficiaries. Recent examples of this trend are:
* IBM’s decision to slash the price of its DB2/6000 Unix workstation DBMS from $849 to $369 and the five-user server version price from $3,199 to $1,595.
* Oracle’s decision to remove user number restrictions on its InterOffice Workgroup (IOW) bundle – providing a low-cost route to Oracle’s server technology.
* Sybase’s emphasis on selling DBMS on short-term lease or rent agreements to eliminate capital expenditure.
* Microsoft’s aggressive entry into the enterprise market with SQL Server and BackOffice – both geared to lower mass market pricing structures.
All of these moves are evidence of an increasingly competitive database market with new rules.
Vendor strategies
Vendors have reacted in a variety of ways to the changes in technology and the potential emergence of Microsoft as a player in the market. As noted above, they have recognized that they must bring the up-front cost of DBMS server software down and bring the cost per seat into line with Microsoft’s aggressive pricing strategy. The result is that the market is becoming increasingly segmented along the lines of the two main approaches to distributed client-server computing:
* A low-end ‘fat client’ workstation/PC market with low-cost products and low levels of support (that is, the mass-market consumer model Microsoft has exploited so successfully with its operating systems).
* A high-end large-scale ‘fat server’ application market with more expensive products and high levels of customer support (the traditional model espoused by IBM).
This segmentation highlights two different vendor approaches – or as one leading DBMS vendor noted acidly: ‘Microsoft sells to a market, we sell to customers.’ Most medium-sized or large user organisations will have to deal with both these approaches and the trick will be to learn how to distinguish between the two. DBMS vendor revenues used to split 70:30 between server licenses and services – the split is now moving towards 50:50.

Researchers at Meta Group view the market segmentation in terms of two distinct strands of application development. On one hand, companies must consolidate their operational transaction processing oriented databases into an integrated whole based on the expansion of traditional models. On the other hand, what Meta Group calls informational databases – which are geared to new applications for customer support and marketing – are more likely to be driven by the cheaper systems. Meta’s argument is that the cost of operational databases is dominated by the need to apply integrity and concurrency constraints, while the cost of informational databases is dominated by ‘modeling and moving data in a dynamic query-driven application environment’. Meta notes that these two strands of development need different approaches to cost reduction and quality improvement. It points out that, although consolidation of operational databases will cut networking costs, additional costs will be incurred from the need to harmonize database views at the conceptual level. Reconciling decentralized data models will be both technically and politically challenging, but the effects will be far greater implementation flexibility, production application stability and system evolution potential, it says.
Cost impact low-down
The shake-up in the DBMS market has significant cost implications. This does not mean that costs will necessarily fall. Overall they will probably remain stable. But they will shift. Vendors can expect server software revenues to continue to fall; middleware, support and ‘service’ incomes will rise. There are a number of factors behind the change – some technological and some political. The technological factors are rooted in the evolution of the client-server model of computing and the growth of networked systems. The separation of front-end processing functions from data management functions – which began with the rise of the relational DBMS – has spawned numerous approaches to distributed computing. Such approaches – manifest in object-oriented computing, workgroup computing and, more recently, network computing – would not have been possible in the era of the centralized mainframe. These technology advances have varying degrees of merit. But alongside the undoubted technological virtues, they also share a need for increased skill levels within user organizations across the board and additional software to make them viable. Many users now have multiple DBMS. Primarily, they need integration technology. This can come in many forms – from simple mechanisms that control SQL requests to full-function ‘middleware’ and data warehousing products. Inevitably, this will mean more investment in new infrastructure, new skills and new products.
Adding a political dimension
In addition to the change in the technological infrastructure there is a political sub-text to client-server. This can be best understood in terms of the continuing conflict between the traditional IBM/mainframe data processing model and the new-wave Microsoft personal computer model. A year or two ago it seemed as though the war had been won, with the Microsoft model emerging as the victor. However, in light of the realization that downsized client-server systems did not cut costs as expected – coupled with the unexpected emergence of the Internet and network computer-based systems – the conflict has entered a new dimension. The new battles are over who has control of server systems, and Microsoft has moved aggressively into markets dominated by IBM, Oracle and others. The conflict maps onto client-server technology in terms of thin client/fat server (the IBM model) and fat client/thin server (the Microsoft model). The initial attractions of the ‘Microsoft’ model were the apparent cost reductions that came from distributing power and function to the desktop. As it turned out, the costs did not fall at the center; they simply shifted from it and often increased. Certainly distributed client-server computing meant that companies spent less on hardware. But integration and support costs – as has been discovered – are much higher. Research from Xephon, for example, has shown that the total costs associated with personal computers connected to local area networks (LANs) turn out to be significantly higher than those of centrally supported mainframe users. Xephon projects that the five-year cost per end user in 1997 will be about $9,000 for a mainframe compared with $14,000 for a LAN-based PC.
Middleware
Customer reactions to the changes in the DBMS market and its cost structure will depend on a number of factors. Chief among these is the need to establish a new infrastructure that can accommodate both the traditional centralized model (often referred to as legacy systems) and the distributed model. DBMS vendors spotted the key to the new infrastructure some time ago; it is middleware. That ‘middleware is central’ to integrated infrastructures is more than a weak pun. That it will cost money is inevitable. And that cost-benefit justification of middleware is difficult – if not impossible – is a given. Chris Stone, president of the Object Management Group (OMG), noted last year: ‘Middleware is what everyone wants but no one wants to pay for.’ Middleware has evolved from a variety of approaches to infrastructural software – some familiar and some novel. In essence, middleware is an attempt to create a general-purpose infrastructure for applications. It aims to provide a common set of resources which may be used as a foundation for applications. Familiar middleware technologies include:
* Transaction processing monitors.
* Relational database management systems.
Novel middleware technologies include:
* Object-oriented computing.
* Standard ‘open’ computing models such as the Open Software Foundation’s (OSF’s) Distributed Computing Environment (DCE).
* Messaging middleware such as IBM’s MQSeries.
* Proprietary ‘middleware’ such as Peer Logic’s Pipes and TechGnosis’s SequeLink.
Many companies will use combinations of the above to create an infrastructure for distributed client-server systems. Inevitably, they will need to confront issues of compatibility between the various middleware elements and take account of the long-term cost-benefits.
The transaction processing view
Traditional transaction processing based on products such as IBM’s CICS provides a useful model for the kind of middleware infrastructure needed to support distributed client-server computing. It has evolved from a conventional centralised control model to a distributed model. Transaction processing (TP) has been the mainstay of mainframe data processing systems for the past 25 years. The need for a large central processor, which can service hundreds – or even thousands – of terminals, coupled with a need for complex software to control the flow of transactions through a system, has characterized TP as a resource-heavy mainframe application. But this is changing. The technology of PCs and local area networks is now powerful and resilient enough to be a viable alternative to the traditional central mainframe for many classes of application. At the same time, the complexity of applications for PCs has grown and has begun to demand the same levels of system integrity associated with established mainframe TP systems. Workflow applications based on software such as Lotus Notes, for example, need features such as security, data integrity and disaster recovery as much as any mainframe TP application ever did.

As might be expected, IBM has been very much in the forefront of taking TP from the central mainframe model to client-server. Over the past decade it has evolved its CICS TP software to work on both Unix-based ‘RISC’ machines such as the RS/6000 and its OS/2 range of PCs. CICS OS/2 started out as a process that could spread to every PC on a local area network, and has moved on from there. It brings two technologies together in the client: one to provide traditional terminal emulation and the other to offer an external call interface (ECI) for processing across the network. The ECI works in a similar way to a remote procedure call (RPC) and enables a process on one machine to be initiated from another. This is an important feature of distributed TP systems and goes far beyond the simple communications used in terminal emulation. The advantage of a product such as CICS OS/2 is that it fits in with a large mature family of systems software products that can all offer the same API. Theoretically, a mainframe CICS application can run on a PC under CICS OS/2 or on an RS/6000 Unix machine with CICS/6000.

The development of applications based on transaction processing has spawned a rich support structure which serves as a viable model for middleware – both for development and for long-term maintenance. This is doubly appealing to established IT departments because it is both familiar and mature. IBM has adopted a similar approach to the evolution of its DB2 product range. Taking the mainframe as a starting point, it has ‘scaled down’ the product to run on mid-range Unix platforms and PCs.
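The following Python sketch illustrates the remote-call shape of an ECI-style request: the client names a server-side program and passes it a COMMAREA-like buffer to be processed and returned. The program and function names are hypothetical, invented for illustration only; this is not the actual CICS ECI API, just the calling pattern it follows.

```python
# A minimal sketch of the ECI-style call pattern described above: the client
# names a server-side program and passes a COMMAREA-like buffer; the server
# runs the program against the buffer and sends the updated buffer back.
# All names here are hypothetical -- this is not the real CICS ECI API.

PROGRAMS = {}

def register(name):
    """Register a 'transaction program' under a CICS-style name."""
    def wrap(fn):
        PROGRAMS[name] = fn
        return fn
    return wrap

@register("ACCTQRY")
def account_query(commarea: dict) -> dict:
    # In a real system this would read and update shared data under
    # transactional control; here it simply fills in a balance.
    commarea["balance"] = 125.50
    return commarea

def external_call(program_name: str, commarea: dict) -> dict:
    """Stand-in for the network hop: invoke the named program remotely and
    return the updated commarea, RPC-style."""
    return PROGRAMS[program_name](dict(commarea))

if __name__ == "__main__":
    reply = external_call("ACCTQRY", {"account": "1234"})
    print(reply)   # {'account': '1234', 'balance': 125.5}
```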
The database view
While TP systems offer a comprehensive – and expensive – model for infrastructural middleware, the DBMS provides a simpler and cheaper alternative. It also offers some advantages when it comes to maintenance. The technological simplicity of a client-server DBMS, with Structured Query Language (SQL) as the glue to join client and server processes, makes it an easier model to maintain. But the DBMS also has limitations. For one thing, DBMS solutions tend to be proprietary – despite their claim to ‘open’ standards. For another, the DBMS view is skewed to a data-centric view of the universe – which might not be appropriate for emerging applications based on concepts such as workflow processing. DBMS vendors have made some progress towards coping with distributed processing in the form of stored procedures and variations on the RPC mechanisms offered under OSF’s DCE. But their priorities still focus very much on the storage and management of data. The result is that while DBMS middleware offers a short-term solution to building a new infrastructure, the long-term costs could well be significantly higher than biting the bullet and adopting a broader solution based on transaction processing or messaging.
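The sketch below shows the DBMS-as-middleware pattern in its simplest form: the client names a stored procedure and lets the server do the processing, so only the result set crosses the network. It assumes a Python DB-API 2.0 driver that implements callproc() (psycopg2 is used here as an example); the connection details and the procedure name are placeholders rather than a real system.

```python
# A minimal sketch of the DBMS-as-middleware pattern: SQL is the glue between
# client and server, and a stored procedure keeps the processing on the
# server so only results cross the network. The host, database and procedure
# names below are placeholders for illustration.

import psycopg2  # any DB-API 2.0 driver that implements callproc() would do

def outstanding_orders(customer_id: int):
    conn = psycopg2.connect(host="dbserver", dbname="orders", user="app")
    try:
        cur = conn.cursor()
        # The stored procedure runs inside the database server process;
        # the client simply names it and collects the result set.
        cur.callproc("sp_outstanding_orders", [customer_id])
        return cur.fetchall()
    finally:
        conn.close()
```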
The messaging view
The most visible and, currently, the clearest example of middleware comes under the heading of ‘message-oriented’ middleware (MOM). According to Ovum this is the area of the market that is likely to see the largest growth in the next five years, and by the year 2000 it is expected to represent about 23% of the market for middleware products.

MOM is in many ways the purest form of middleware. It uses the concept of a message to separate processes so that they can operate independently and, often, simultaneously. This means, for example, that a workstation can send a request for data that needs to be collected and collated from multiple sources, while continuing with other processing. This form of so-called asynchronous processing enables middleware to provide a rich level of connectivity in many types of application. It might be a simple message to download some data from a database server or it might be an advanced electronic mail system with built-in workflow processing. In many ways message-oriented middleware is similar to electronic mail systems such as Lotus cc:Mail. But although it uses similar mechanisms and can, indeed, provide the foundation for electronic mail, there is a key difference: electronic mail passes messages from one person to another, whereas MOM passes messages back and forth between software processes.

There are clear advantages in the use of MOM in many modern applications. It provides a relatively simple API – making it easy for programmers to develop the skills they need to use it. MOM offers an economical set of commands – usually no more than 20 – and it can be equated to reduced instruction set computer (RISC) chips in its simplicity. The flexibility of the API also extends to so-called ‘legacy’ applications, so that organisations can introduce distributed computing gradually without incurring a massive re-programming load.

There is of course a price to pay for MOM’s simplicity. One of the main problems with MOM is that its function is restricted to message passing. In other words, it does not include facilities to convert data formats. If, as in many systems, data is to be transferred from mainframes to personal computers, the data conversion from EBCDIC to ASCII formats must be handled elsewhere. The message-oriented middleware software only provides the transport and delivery mechanisms for messages – it is not concerned with the content. This adds to the complexity of the application, which must take responsibility for creating and decoding messages. MOM’s simplicity also prejudices performance because messages are usually processed from a queue one at a time. The problem can be solved by running multiple versions of the message processing software – although this is generally thought to be a less than elegant method. This particular problem means that MOM is not usually suitable for applications that require ‘real time’ communications within applications.

The leading MOM products inevitably come from the established systems suppliers – with IBM and DEC having the highest profile. IBM’s MQSeries – originally developed for IBM’s main platforms (mainframe MVS, OS/400 and AIX, IBM’s Unix) – now supports a wide range of non-IBM hardware platforms (Sun Solaris, Tandem, NCR and others), which has raised its profile. It accommodates all the major computer languages (Cobol, C, Visual Basic) and network protocols (SNA, TCP/IP, DECnet, Novell IPX). Front-end client support covers Microsoft Windows, MS-DOS and OS/2.
MQSeries goes much further than many MOM products in providing support for transactional messaging, with all of the associated benefits this brings. This includes features such as two-phase commit mechanisms, security, and restart and recovery which would normally be found in transaction management software. Significantly, IBM sees MQSeries as a major infrastructural technology for important emerging application areas such as electronic mail, workflow and its implementation of object-oriented computing.

DEC’s DECmessageQ also supports a wide range of other manufacturers’ operating systems. In addition to the company’s proprietary DEC VAX VMS and Alpha platforms, DECmessageQ covers the leading Unix implementations (IBM, Sun, Hewlett-Packard, SCO) and Microsoft’s Windows environments. Language support extends to Cobol, C, Fortran and Ada, as well as Microsoft Visual Basic and C++. DEC also includes a wide range of queue processing features designed to facilitate systems management. Future plans will enable DECmessageQ to support multiple APIs and to adopt formal message queuing standards as they emerge from standards bodies.

Among the third-party suppliers, Peer Logic Inc’s Pipes is one of the leading contenders – again supporting the main platforms from IBM, DEC and Hewlett-Packard. And again, the major languages and network protocols are supported.
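The queue-based, asynchronous pattern that underpins all of these products can be illustrated in a few lines of Python. The sketch below is purely schematic: the queue names and message formats are invented and it is not the MQSeries MQI or the DECmessageQ API. It simply shows the requester carrying on with other work while a separate process drains the queue and posts a reply.

```python
# A minimal sketch of the message-queuing pattern described above: the
# requester puts a message on a queue and continues with other work while a
# separate process services the queue and posts a reply. The queue names and
# message contents are illustrative only.

import queue
import threading
import time

request_q = queue.Queue()
reply_q = queue.Queue()

def message_server():
    """Drains the request queue one message at a time (the serial processing
    the text notes can limit MOM throughput) and posts replies."""
    while True:
        msg = request_q.get()
        if msg is None:          # shutdown signal
            break
        time.sleep(0.1)          # stands in for collating data from several sources
        reply_q.put({"id": msg["id"], "rows": ["order 17", "order 42"]})

threading.Thread(target=message_server, daemon=True).start()

# The requester is not blocked: it queues its message, carries on with other
# processing, then collects the reply when it is ready.
request_q.put({"id": 1, "query": "outstanding orders"})
print("carrying on with other processing...")
print("reply received:", reply_q.get())
request_q.put(None)
```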
The object view
Object-oriented technology is so fundamentally different to existing methods that many organisations are holding back on implementing it. The effort and cost involved in the massive re-engineering load associated with moving to object-oriented technology is a disincentive. Moreover, the continued presence of traditional development methods such as Cobol – which do not map well onto the object-oriented model – suggests that it is not likely to move up the agenda for some time. But there is no doubt that, potentially, object-oriented technology offers the most promising solution to building an infrastructure for distributed computing. Object-oriented technology provides a means to control the fragmentation of systems caused by client-server computing. It starts from the premise that software applications can be constructed from ‘a kit of parts’ and provides the mechanisms to integrate them into a coherent system. It follows that ‘part replacement’ – in other words maintenance – becomes a planned and managed activity within an object-oriented environment. In the long term, the mechanisms for code re-use and replacement, which are an inherent part of the object-oriented method, go a long way towards reducing the cost of infrastructural maintenance. It seems likely that most companies will use a combination of middleware products – with traditional transaction management sitting alongside object-oriented computing or messaging middleware. This approach enables legacy systems to be migrated slowly to new structures without the costs that would be associated with a wholesale shift to a new technology such as, for example, object-oriented systems or data warehousing.
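The ‘kit of parts’ idea can be shown in a few lines of code: components are written to a shared interface, so replacing one part (maintenance, in other words) leaves the code that uses it untouched. The class and interface names in the Python sketch below are invented purely for illustration.

```python
# A minimal sketch of the 'kit of parts' idea: components are written to a
# shared interface, so replacing one part does not disturb the code that
# uses it. The class and store names here are illustrative only.

from typing import Protocol

class CustomerStore(Protocol):
    def lookup(self, customer_id: int) -> dict: ...

class MainframeStore:
    """Original part: wraps an existing (legacy) data source."""
    def lookup(self, customer_id: int) -> dict:
        return {"id": customer_id, "source": "mainframe"}

class WarehouseStore:
    """Replacement part: same interface, different implementation."""
    def lookup(self, customer_id: int) -> dict:
        return {"id": customer_id, "source": "data warehouse"}

def customer_report(store: CustomerStore, customer_id: int) -> str:
    # Application code depends only on the interface, not on which part
    # happens to be plugged in.
    record = store.lookup(customer_id)
    return f"customer {record['id']} (from {record['source']})"

print(customer_report(MainframeStore(), 42))
print(customer_report(WarehouseStore(), 42))   # part replaced, caller unchanged
```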
Database pricing futures
The apparent fall in the price of DBMS server software and cost per user hides a fundamental shift in infrastructural costs. Increasingly, organisations faced with multiple DBMS products at all levels of IT will require integration technology and the skills to deploy it. Middleware is without doubt the key element in this equation. It is the foundation of all future technological evolution in networked systems, whether in an ‘operational’ context, an ‘informational’ context or a combination of both. Middleware is, however, notoriously difficult to cost justify. No obvious benefit can be attached to what senior managers will view as nothing more than an extra layer of systems software between the operating systems and the applications. IT managers must resist the temptation to take the easy option when it comes to building an infrastructure. The straight cost of DBMS is no longer the yardstick by which application infrastructure costs can be measured. There is a need to consider middleware, systems management, data warehousing and the technical skills needed to deploy a complete solution based on a combination of all of these and DBMS.