While clustering RISC/Unix servers to create massively parallel machines is old hat, and relatively inexpensive Lintel and Lopteron clusters are now common, whatever kind of supercomputer you build, making human use of it means turning terabytes of data into pictures that represent whatever phenomenon you are modeling in the simulations running inside the machine.

Until now, scientists who wanted such pictures to analyze their models had to either move a subset of the data to their workstations or sit in the same physical location as the supercomputer cluster and have it draw the pictures itself. The Maverick Terascale Visualization System, developed by Sun and the University of Texas at Austin and funded by the National Science Foundation, is a separate visualization server that takes models from supercomputers, does the graphical rendering needed to illustrate them, and then pumps the resulting pictures in real time over a secure Internet link. In other words, researchers no longer have to be in the same facility as the supercomputer to do their work.
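The announcement does not spell out how Maverick moves those pictures, but the underlying idea reduces to a simple server-side loop: render a frame from the model, compress it, and push it down an encrypted connection to a thin client. Here is a minimal conceptual sketch in Python, assuming a placeholder render_frame() function and an ordinary TLS socket rather than whatever protocol Maverick actually uses:

```python
# Conceptual sketch of server-side remote rendering: render frames on the
# big machine, compress them, and stream them to a thin client over TLS.
# render_frame() is a stand-in for real GPU rendering; this is not
# Maverick's actual software stack.
import socket, ssl, struct, zlib

def render_frame(step, width=640, height=480):
    # Placeholder: a real system would rasterize simulation data on the
    # graphics cards; here we just synthesize a gradient as dummy pixels.
    row = bytes((step + x) % 256 for x in range(width))
    return row * height

def serve(host="0.0.0.0", port=8443, certfile="server.pem"):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile)          # the "secure Internet link"
    with socket.create_server((host, port)) as srv:
        with ctx.wrap_socket(srv, server_side=True) as tls_srv:
            conn, _ = tls_srv.accept()
            with conn:
                for step in range(10000):
                    frame = zlib.compress(render_frame(step))
                    # Length-prefix each compressed frame so the client
                    # can split the stream back into discrete images.
                    conn.sendall(struct.pack("!I", len(frame)) + frame)
```

A client would read the four-byte length prefix, then the compressed frame, decompress it, and display it, all without a copy of the raw simulation data ever leaving the machine room.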

In effect, Maverick is the frontal lobe of the TeraGrid project, which was funded in 2002 with $88 million to create a 20 teraflops supercomputing grid spread across the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the San Diego Supercomputer Center at the University of California, San Diego, Argonne National Laboratory, the Center for Advanced Computing Research at Caltech, and the Pittsburgh Supercomputing Center. Funding for TeraGrid comes from the US National Science Foundation and private partners including IBM, Sun Microsystems Inc and Intel Corp.

Maverick is not a particularly big machine, at least not compared to the supercomputers it renders visualizations for. It consists of a single Sun Fire Enterprise 25000 (E25K) server using 1.2GHz dual-core UltraSparc-IV processors, with 512GB of main memory and rated at a mere 265 gigaflops. However, that is a large amount of main memory for storing graphics data, and the E25K has very high memory bandwidth, so it can coordinate the movement of data at memory speed to the eight high-powered graphics cards that are lashed together and working in concert to render images. The system runs Sun's Solaris Unix variant, of course, while the graphics cards sit in Linux boxes attached to the E25K through a high-speed network. The E25K runs a specially tweaked version of Sun's Grid Engine software that allows researchers to dispatch visualization jobs to the machine over the Internet.
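Sun does not describe the exact submission interface here, but Grid Engine jobs are ordinarily dispatched with the standard qsub command. As a rough sketch of what kicking off a rendering run might look like (the script name, parallel environment, and resource limits below are hypothetical, not Maverick's real configuration):

```python
# Rough illustration of dispatching a visualization job via Grid Engine's
# standard qsub command. The script path, job name, PE name, and resource
# requests are hypothetical placeholders, not Maverick's actual setup.
import subprocess

def submit_vis_job(script="render_model.sh", name="vis-run",
                   hours=2, slots=8):
    cmd = [
        "qsub",
        "-N", name,                   # job name shown by qstat
        "-l", f"h_rt={hours}:00:00",  # hard wall-clock limit
        "-pe", "smp", str(slots),     # request parallel slots
        script,
    ]
    # qsub prints a confirmation line containing the assigned job ID
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print(submit_vis_job())
```

The parallel-environment request is what would let a single job claim several of the machine's graphics pipes at once; the actual queue and environment names are site-specific.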

Sun is certainly not the first company to figure out that you can gang together a bunch of 3D workstation graphics cards to make a visualization box. In July 2003, SGI Inc announced a similar visualization machine called the Onyx4 UltimateVision system, which can cluster from 2 to 32 high-end graphics cards to create what is, in effect, one giant and very powerful multipipelined graphics card. But Sun claims to be the first company to allow remote access to such a visualization system.