Building out a grid compute and storage utility is no easy task, which is why Sun has spent more than a year constructing its Sun Grid.

According to Aisling MacRunnels, senior director of utility computing at Sun, the server maker has gone through a few different architectures in building out the utility, which is expected to make its public launch in March or April of this year.

The grid lashes together a large number of servers and lets customers buy pods of capacity for set periods of time, running their workloads on the shared infrastructure in a secure fashion.

Sun’s plan calls for more than just commercial customers, who are formally vetted by Sun and tend to buy large blocks of capacity, to log into the grid. The company also intends to give the grid a Web interface and open it up to the public, allowing anyone with a credit card and $1 per CPU per hour to upload work to the Sun Grid. Sun apparently has thousands of customers lined up to use the service.
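As a back-of-the-envelope illustration of that pricing model, here is a minimal Python sketch; the function name and inputs are hypothetical, not part of any Sun interface:

```python
def sun_grid_cost(cpus: int, hours: float, rate_per_cpu_hour: float = 1.00) -> float:
    """Estimate the bill for a job at the advertised $1 per CPU per hour rate."""
    return cpus * hours * rate_per_cpu_hour

# A hypothetical 100-CPU job that runs for 8 hours would cost $800.
print(f"${sun_grid_cost(100, 8):,.2f}")  # $800.00
```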

Because of this, Sun has to be very careful about constructing the network that forms the backbone of the grid. And what works on paper does not always work in the real world.

For this reason, says MacRunnels, Sun has gone through two generations of network backbones, and has picked Force10 as its partner for the latest incarnation of the Sun Grid. (No, MacRunnels would not rat out the two suppliers that are being replaced.)

"Secure multi-tenancy is the key to utility computing, and now we can scale more securely," says MacRunnels.

The Force10 machines support up to 1,260 Gigabit Ethernet and 224 10 Gigabit Ethernet ports. The switches also support access control lists (ACLs) that restrict access to network services at multiple layers of the TCP/IP stack (both switching and routing), which is a big plus for a grid-based shared compute utility.
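To give a feel for the idea (this is a conceptual sketch, not Force10's actual implementation), here is a minimal Python model of multi-layer ACL evaluation: each rule can match on Layer 2 (VLAN tag), Layer 3 (IP prefixes), and Layer 4 (protocol and port) fields, and the first matching rule decides a packet's fate. All rule and field names here are hypothetical.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional

@dataclass
class AclRule:
    """One ACL entry; a field set to None means 'match anything'."""
    action: str                  # "permit" or "deny"
    vlan: Optional[int] = None   # Layer 2: VLAN tag
    src: Optional[str] = None    # Layer 3: source prefix, e.g. "10.1.0.0/16"
    dst: Optional[str] = None    # Layer 3: destination prefix
    proto: Optional[str] = None  # Layer 4: "tcp" or "udp"
    dport: Optional[int] = None  # Layer 4: destination port

    def matches(self, pkt: dict) -> bool:
        if self.vlan is not None and pkt["vlan"] != self.vlan:
            return False
        if self.src is not None and ip_address(pkt["src"]) not in ip_network(self.src):
            return False
        if self.dst is not None and ip_address(pkt["dst"]) not in ip_network(self.dst):
            return False
        if self.proto is not None and pkt["proto"] != self.proto:
            return False
        if self.dport is not None and pkt["dport"] != self.dport:
            return False
        return True

def evaluate(acl: list, pkt: dict) -> str:
    """First matching rule wins; deny by default, as switch ACLs typically do."""
    for rule in acl:
        if rule.matches(pkt):
            return rule.action
    return "deny"

# Hypothetical tenant isolation: a pod on VLAN 100 (10.1.0.0/16) may reach
# only its own storage heads on TCP port 2049 (NFS); everything else drops.
acl = [
    AclRule("permit", vlan=100, src="10.1.0.0/16", dst="10.1.255.0/24",
            proto="tcp", dport=2049),
    AclRule("deny",   vlan=100),
]

print(evaluate(acl, {"vlan": 100, "src": "10.1.2.3", "dst": "10.1.255.10",
                     "proto": "tcp", "dport": 2049}))  # permit
print(evaluate(acl, {"vlan": 100, "src": "10.1.2.3", "dst": "10.2.0.5",
                     "proto": "tcp", "dport": 22}))    # deny
```

The first-match-wins, deny-by-default structure is what lets one shared switch fabric keep tenants' workloads walled off from each other, which is presumably why Sun cares.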

Each Force10 switch also has three processors driving it and built-in fault tolerance. The largest Force10 switch/routers fill a half rack. Incidentally, IBM is a Force10 partner, and supplies some of the custom ASICs as well as the PowerPC 405GP processors used in the E-Series switch/routers.

In a separate announcement, Force10 said that its E-Series switch/routers have been chosen as the cluster interconnect for a 60 teraflops Linux cluster made from 4,000 Xeon-based Dell PowerEdge 1850 servers. The cluster, which sits at Sandia National Laboratories, is nicknamed Thunderbird, and it previously used an InfiniBand interconnect.