The key enabling technology for Pyramid Technology Corp's RM1000 Cluster Server configuration is a new board that plugs the CPU bus of Nile symmetric multiprocessing or clustered nodes directly into the massively parallel Reliant RM1000 mesh interconnect. Previously, nodes had to be attached over SCSI cabling. In conjunction with Pyramid's 'virtual disk' technology, the board dispenses with the requirement for discrete input-output subsystems and enables Nile systems to send messages through the 320Mbps mesh in parallel, directly from the sender's memory to the receiver's, without CPU intervention. Nodes can now be wired up to 80 feet apart; Pyramid is talking with two firms about adding fibre links to support greater distances. The overall effect appears very similar to Tandem Computers Inc's ServerNet technology, and from a high-level perspective it is, Pyramid advises, though there are fundamental differences at the low level.

Parent Siemens Nixdorf Informationssysteme AG is working on an add-in board, due later this year, that will enable its RM600 symmetric multiprocessing servers to tie into the RM1000. New Nile and RM600 systems will come pre-configured; existing systems will use a backplane slot to house the $25,000 board.

Another key advantage of the RM1000 Cluster Server system is that Pyramid will bundle its Oracle lock manager, previously a $20,000 extra, free of charge with the merged Reliant Unix operating system. Furthermore, the company's Very Large Cache mechanism now enables users to allocate large amounts of cache to particular query tasks. Pyramid claims it is actually cheaper to build clusters using the RM1000 than to create shared-SCSI Nile nodes.

The RM1000 can theoretically house up to 36 cells, each of which can support six processor nodes, 24 disk nodes, cooling systems, four internal and four external SCSI buses, and six Ethernet interfaces. Each processor node is either a single Reliant RM1000 processor node or a Nile symmetric multiprocessing node. Each single Reliant RM1000 node supports one R4400 with 1Mb or 4Mb cache, up to 512Mb main memory, two SCSI buses, and an interface to the redundant mesh networks, Ethernet and cooling fans. Each Nile node has from two to 16 R4400s, 4Mb Level 2 cache, up to 4Gb main memory, and an interface to the redundant mesh networks, power and cooling. Each disk module has 2Gb or 4Gb capacity. Although the design will support more, Pyramid says that 16-way clustering is as high as it will go for now, believing that will outpace most of the competition.

The rationale for RM1000 Cluster Server is that it maintains the symmetric multiprocessing programming model on Nile nodes while enabling parallel queries to be conducted across massively parallel and symmetric multiprocessing nodes for data warehouse and decision support-type applications, alongside transaction processing applications that require fast, repeated access to smaller amounts of data. Massively parallel vendors have tended to shy away from transaction processing configurations because of the poor resilience of SCSI cabling.
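Pyramid has not published the programming interface to the mesh board itself, but the memory-to-memory transfer model it describes is descriptor-based: software posts a request and an adapter moves the data with no CPU copy on either side. The following C fragment is a minimal sketch of that idea only; every name in it (mesh_desc, mesh_post, the node numbering) is invented for illustration, and the memcpy stands in for what would be a hardware transfer across the mesh.

    /*
     * Illustrative sketch only: models posting a descriptor so an
     * adapter can move data from the sender's memory straight into
     * the receiver's. All identifiers here are hypothetical; the
     * real RM1000 mesh interface is not documented in the article.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct {
        uint16_t    dst_node;   /* target node on the mesh          */
        const void *src;        /* source buffer in sender's memory */
        void       *dst;        /* destination buffer on the target */
        size_t      len;        /* bytes to transfer                */
    } mesh_desc;

    /* Stand-in for the adapter: in hardware, the transfer would
     * proceed without further CPU intervention once posted.       */
    static void mesh_post(const mesh_desc *d)
    {
        memcpy(d->dst, d->src, d->len);   /* simulated memory-to-memory move */
        printf("posted %zu bytes to node %u\n", d->len, (unsigned)d->dst_node);
    }

    int main(void)
    {
        char tx[] = "parallel query fragment";
        char rx[sizeof tx];               /* receiver's buffer */

        mesh_desc d = { .dst_node = 3, .src = tx, .dst = rx, .len = sizeof tx };
        mesh_post(&d);                    /* CPU is free once this returns */

        printf("receiver sees: %s\n", rx);
        return 0;
    }

The point of the model is the one the article makes: once the descriptor is posted, the sending CPU does no further work, which is what lets Nile nodes drive the 320Mbps mesh in parallel.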
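For readers tallying the configuration limits, the theoretical ceilings follow from straight multiplication of the per-cell figures quoted above. The short program below makes that arithmetic explicit; the per-cell numbers are from the article, the totals are derived, and the capacity figure assumes the larger 4Gb disk module throughout.

    /* Theoretical RM1000 ceilings implied by the article's per-cell
     * figures; the totals are simple multiplication, not quoted specs. */
    #include <stdio.h>

    int main(void)
    {
        const int cells         = 36;   /* theoretical maximum          */
        const int proc_per_cell = 6;    /* processor nodes per cell     */
        const int disk_per_cell = 24;   /* disk nodes per cell          */
        const int gb_per_disk   = 4;    /* 2Gb or 4Gb modules; take 4Gb */

        printf("processor nodes: %d\n", cells * proc_per_cell);              /* 216    */
        printf("disk nodes:      %d\n", cells * disk_per_cell);              /* 864    */
        printf("disk capacity:   %dGb\n", cells * disk_per_cell * gb_per_disk); /* 3456 */
        return 0;
    }

Note that these are design ceilings; as the article says, Pyramid is capping clustering at 16-way for now.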