ccNUMA, or distributed shared memory, in which every CPU in the system can address the main memory on every node while coherence is maintained system-wide, is one step beyond traditional clustering (see SynFinity in Top Stories section). The cc, cache coherence, is provided through the system bus, because the hardware has to be able to see cache transactions in order to provide system-wide coherency; it can't be done unless the ccNUMA interconnect sits where the processor talks to memory. With clusters, by contrast, individual nodes usually plug into an I/O bus such as PCI. Those buses are insulated from processor memory transactions, and caches cannot be seen through the PCI bus.

Shared memory across a network is different from cache coherence. Each node understands the memory space of the other nodes, enabling the system to perform operations that appear like a write to memory across the network, rather than just telling the network interface card (NIC) which node to send a packet to. That provides low latency, but not cache coherency. Shared memory also provides a superset of what is required to do message passing (see the first sketch below), and most ccNUMA systems support message passing too, on NT as well since Wolfpack.

Observers remind us, though, that the concept in itself is hardly new; BBN's Butterfly architecture supported the SCI interface in the 1980s. And ccNUMA critics, including Sun Microsystems Inc, say the technology is fundamentally flawed because it introduces a latency, or speed bump, into the SMP shared memory model of computing.
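To make the "superset of message passing" point concrete, here is a minimal sketch, not from the article, of an explicit send/receive pair built from nothing but ordinary loads and stores on a shared region. An anonymous shared mapping between two processes stands in for the remote memory of a ccNUMA or SCI-style interconnect; the mailbox_t type and all names here are illustrative assumptions.

    /* Message passing synthesised from shared memory: a one-slot
       mailbox. The sender stores a payload and raises a flag; the
       receiver spins on the flag. Illustrative sketch only. */
    #include <stdatomic.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    typedef struct {
        atomic_int ready;        /* 0 = empty, 1 = message present */
        char       payload[64];
    } mailbox_t;

    int main(void) {
        /* The shared mapping plays the role of network-wide memory:
           both sides address it with plain memory operations. */
        mailbox_t *box = mmap(NULL, sizeof *box, PROT_READ | PROT_WRITE,
                              MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (box == MAP_FAILED) { perror("mmap"); return 1; }
        atomic_init(&box->ready, 0);

        if (fork() == 0) {                    /* child: the "sender" */
            strcpy(box->payload, "hello over shared memory");
            /* release store: the payload is visible before the flag */
            atomic_store_explicit(&box->ready, 1, memory_order_release);
            _exit(0);
        }

        /* parent: the "receiver" spins until the flag flips */
        while (atomic_load_explicit(&box->ready, memory_order_acquire) == 0)
            ;                                 /* busy-wait */
        printf("received: %s\n", box->payload);
        wait(NULL);
        munmap(box, sizeof *box);
        return 0;
    }

Note there is no cache coherence protocol in sight: the example only needs the two sides to agree on an ordering for one flag, which is exactly why shared memory can carry message passing but not the reverse.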
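On the critics' latency point, the sketch below, again an illustration rather than anything from the vendors, times the same dependent pointer chase over memory allocated on the local NUMA node and on a remote one, using Linux's libnuma (an assumption; compile with -lnuma on a machine with at least two NUMA nodes). The gap between the two runs is the "speed bump" in the SMP model.

    /* Measure local vs remote memory latency on a NUMA machine.
       Assumes Linux with libnuma and at least two nodes. */
    #include <numa.h>
    #include <stdio.h>
    #include <time.h>

    #define N (1L << 22)        /* 4M longs, large enough to defeat caches */

    static void fill_cycle(long *a) {
        /* a[i] = (i + 65537) mod N; 65537 is odd, so the permutation
           is one N-long cycle and every load depends on the last */
        for (long i = 0; i < N; i++)
            a[i] = (i + 65537) % N;
    }

    static double chase_ns(long *a) {
        struct timespec t0, t1;
        long i = 0;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long s = 0; s < N; s++)
            i = a[i];           /* latency-bound: no overlap possible */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        volatile long sink = i; (void)sink;   /* keep the loop alive */
        return (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    }

    int main(void) {
        if (numa_available() < 0 || numa_max_node() < 1) {
            fprintf(stderr, "need a NUMA machine with at least 2 nodes\n");
            return 1;
        }
        numa_run_on_node(0);    /* pin execution to node 0 */

        long *local  = numa_alloc_onnode(N * sizeof(long), 0);
        long *remote = numa_alloc_onnode(N * sizeof(long), 1);
        if (!local || !remote) { fprintf(stderr, "allocation failed\n"); return 1; }
        fill_cycle(local);
        fill_cycle(remote);

        printf("local  chase: %.1f ns/load\n", chase_ns(local)  / N);
        printf("remote chase: %.1f ns/load\n", chase_ns(remote) / N);

        numa_free(local,  N * sizeof(long));
        numa_free(remote, N * sizeof(long));
        return 0;
    }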