The UK Computer Measurement Group held a seminar in London last week to address the problems associated with the capacity management of relational databases. Since this independent user organisation is known for its Blueish hue, the day was taken up with discussion of DB2, and with the platform shared by Amdahl, IBM and Teradata, the occasion promised lively debate.

Amdahl highlights impediments to DB2 performance, offers remedies

First on his feet was Trevor Hackett, database consultant with Amdahl UK, whose talk addressed, among other things, two themes that were to be picked up by other speakers throughout the day: the use of SQL to access databases, and the best management of the buffer pool. Hackett stressed that the way SQL is written can have a marked influence on database performance, depending on the type of predicate used. Ideally, predicates should be as precise as possible: a predicate coded as a string of ORed equalities, say column = 1 OR column = 2 OR column = 3, could produce horrendous results, whereas the equivalent range predicate, column BETWEEN 1 AND 3, should perform excellently (the sketch at the end of this item illustrates the point). The important thing is to filter the rows returned between SQL and the database by selecting a limited number of columns, not a whole table. Hackett suggested that it is also useful to remember to use stage 1 predicates, since these restrict the amount of work DB2 has to do.

When it comes to on-line processing, Hackett thought that for browsing transactions the use of CICS TS queues or IMS SPAs is better than going back to the database. He also believes that thread re-use should be considered for high-volume transactions in CICS, because this saves precious CPU resources. Another tip he offered was to take checkpoints during batch processing, which helps performance, particularly with something like a 15-hour MRP package, because it cuts the time needed for recovery. Hope should be taken from Hackett's belief that every release of DB2 will see the buffer pools increased.
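By way of illustration of Hackett's point about predicates, the queries below are a minimal sketch; the ORDERS table and its columns are invented for the purpose, and whether DB2 can use an index on any given predicate depends on the release and the access path chosen.

    -- Hypothetical table and columns, for illustration only.
    -- Poor: the ORed equalities may be treated as a non-indexable
    -- compound predicate, and SELECT * returns every column.
    SELECT *
    FROM   ORDERS
    WHERE  PRIORITY = 1 OR PRIORITY = 2 OR PRIORITY = 3;

    -- Better: the equivalent range is a simple, indexable stage 1
    -- predicate, and only the columns actually needed are fetched.
    SELECT ORDER_NO, CUSTOMER_NO, PRIORITY
    FROM   ORDERS
    WHERE  PRIORITY BETWEEN 1 AND 3;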

BGS survey finds that IMS and DB2 are strikingly similar in performance

Guy Bullimore of BGS Systems Ltd revealed the results of a survey his company had conducted into the relative merits of migrating from IMS to DB2. These will be of interest to those data processing departments that want to move to DB2 for strategic reasons but need to justify the hardware costs of such a move. The findings were remarkable in that the conclusion was drawn that DB2 and IMS HIDAM subsystem structures are strikingly similar and produce similar performance for record-level processing in a high-volume transaction processing environment. DB2 does offer higher throughput and faster response time, largely because of its asynchronous updates and inserts, but in all basic program requests DB2 required between 1.5 times and 2.5 times the amount of CPU that IMS used. In capacity planning terms, a site whose IMS workload consumes, say, 40% of a processor should therefore budget between 60% and 100% of it for the same work under DB2.

During the Q&A session at the end of the conference, the panel was asked whether it was more cost-effective for an IMS user wanting more transaction processing throughput to buy a database engine than to migrate to DB2. An amused Gostick replied that this depended on the size of the problem and the size of the application; a less amused Campbell said that the IBM rule of thumb was to use DB2. Both agreed that those wanting higher throughput will get it from a database engine, while those wanting the shortest possible response time should stick to the mainframe.

Database engines form natural bridge between SAA and Unix applications

Dr Robin Gostick of Teradata was keen to emphasise that his company was not the only one peddling database engines, commenting that Oracle will be releasing Oracle on the Ncube processor later this year; furthermore, he believes that Oracle and Ingres tools will soon both be able to access the DBC/1012. Gostick thinks that the era of the database engine has arrived because, while it used to be the case that parallel processing required the rare skills of a programmer versed in a parallel processing language, nowadays trendy SQL is all that is required. Gostick argued that SQL is the best programming language for parallel processing, since it simply asks 'do this task for me, somehow', so database engines can handle programs written in any language via SQL (see the sketch at the end of this item). Machines such as the DBC remove the database part of the application from the mainframe host and offload about 99% of the work, leaving the mainframe to do the general-purpose part of the application. With complex applications, Gostick admits that the savings are not so great, but he explained that the DBC's forte lies in processing large decision support applications, where there is an immediate payback for the user in terms of least disruption to the organisation. Indeed, Oracle on Ncube is believed to provide split processors, so that some can be dedicated to transaction processing and some to decision support systems.

Gostick also argued that a database machine is the best bridge between SAA and Unix, as both environments can access the database freely and the user can then choose the best place for the application. This will undoubtedly form part of Oracle's Ncube sales pitch and is unlikely to gladden IBM's heart. Meanwhile, since he was talking Unix, Gostick said that AT&T has 43 Teradata systems which it is using for full-scale production work running Unix and using Tuxedo as a front end. Far be it from Computergram to point out that Teradata also has a very close relationship with a certain NCR Corp – it makes you think.
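Gostick's argument is easy to see on paper. In the hedged sketch below, the SALES table and its columns are invented for illustration: the query states only what answer is wanted, and an engine such as the DBC/1012 is then free to scan each partition of the table on a different processor and merge the partial results, without the programmer writing a line of parallel code.

    -- Hypothetical decision-support query, names invented for illustration.
    -- Nothing in the SQL says how the work is divided: a parallel engine
    -- can scan each partition of SALES on a separate processor and then
    -- combine the per-region sums behind the programmer's back.
    SELECT   REGION, SUM(REVENUE)
    FROM     SALES
    WHERE    SALE_YEAR = 1990
    GROUP BY REGION
    ORDER BY 2 DESC;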

IBM says capacity managers can blame themselves for poor DB2 performance

John Campbell of IBM UK Ltd was at pains to explain that DB2 performance is a direct reflection of how well capacity is managed. Users may like to know that IBM offers a DB2 SupportPac that states simple ROTs – for the uninitiated, that is rules of thumb – on the use of buffer pools, how predicates are managed and so on.