The company designs InfiniBand adapter and switch integrated circuits that are added to servers, storage, communications infrastructure equipment, and embedded systems.
The Yokneam, Israel-based company said its products are incorporated into servers produced by the five largest server vendors: IBM, Hewlett-Packard, Dell, Sun Microsystems, and Fujitsu. It said it also supplies storage and communications infrastructure equipment vendors such as Cisco Systems, LSI Logic, and Network Appliance.
With customers of this size, sales to customers that each accounted for more than 10% of revenue rose from 36% of total revenue in 2003 to 56% in 2005, and the company warned that the loss of one or more of its principal customers could cause revenue to decline materially. However, it said it expects such customers to account for a decreasing but still significant portion of revenue for at least the remainder of 2006.
Since it introduced its first product in 2001, it said it has shipped products containing more than 1.4 million InfiniBand ports.
Mellanox said several IT trends will affect demand for interconnect products and the performance they will be required to deliver. Historically, enterprises met their high-end computing and storage needs with monolithic systems based on proprietary components, which were expensive to buy and carried high operating and maintenance costs.
But more recently, enterprises have deployed less expensive, high-performance clustered systems with multiple off-the-shelf standardized servers and storage systems linked by high-speed interconnects. In addition, the transition to multiple and multi-core processors in servers has significantly increased their computing capabilities.
With IT managers increasingly focused on reducing the costs of running data centers, there has been a growth in the adoption of compact form-factor blade servers and the use of virtualization software.
Mellanox said I/O bandwidth has not kept pace with processor advances, creating performance bottlenecks at a time when fast data access has become a critical requirement for exploiting microprocessors' increased compute power.
It said the increasing use of clustered servers and storage systems has added to the complexity of interconnect configurations, making them harder to manage and more expensive to operate.
Putting the boot into competitors, Mellanox says that most interconnects are not designed to provide reliable connections when used in a large clustered environment, which can cause interruptions in data transmission. It also says that most high-performance interconnects are implemented with complex, multi-chip semiconductor solutions, which have traditionally been extremely expensive.
Listing the competing technologies, it says Myrinet is a proprietary interconnect that has been used primarily in supercomputer applications, but its use has been declining, largely because of the availability of industry standards-based interconnects that offer superior price/performance.
While Ethernet is an industry standard capable of providing relatively high bandwidth, its overall efficiency and reliability are inferior to certain alternative interconnects.
Fibre Channel is an industry standard limited to storage applications, but it lacks a standard software interface, has limited bandwidth, and remains more expensive than other standards-based interconnects.
Mellanox argues that InfiniBand-based interconnects have significant advantages that leave the technology well positioned to become the leading high-performance interconnect. InfiniBand provides superior bandwidth and latency: Mellanox's host channel adapters (HCAs) deliver bandwidth of up to 20Gb/s and its current switch ICs support bandwidth of up to 60Gb/s.
The InfiniBand specification supports the design of interconnect products with up to 120Gb/s bandwidth.
While other interconnects require use of individual cables to connect servers, storage and communications infrastructure equipment, InfiniBand allows for the consolidation of multiple I/Os on a single cable or backplane interconnect, critical for blade servers and embedded systems.
It says competing interconnect technologies are not well suited to be unified fabrics because their fundamental architectures are not designed to support multiple traffic types.
Mellanox says InfiniBand products are generally available at a lower cost than other high-performance interconnects and, by facilitating clustering and reducing complexity, InfiniBand offers further opportunities for cost reduction.
It quotes researcher IDC as saying that InfiniBand's share of high-performance computing (HPC) cluster interconnect revenue grew from 1.7% in 2003 to 17.2% in 2005. Mellanox believes the primary driver of InfiniBand product shipments in the near future will be increasing usage in server and storage systems.
IDC predicts that approximately 4% of the 7.7 million servers expected to ship across the entire server market in 2006 will integrate InfiniBand products. But from 2006 to 2010, IDC estimates that the use of InfiniBand in servers will increase at a 40% compound annual growth rate, resulting in over 1.1 million InfiniBand servers in 2010.
In addition to servers, Mellanox says storage systems represent another significant opportunity and, according to IDC, shipments of Fibre Channel adapters are expected to increase from 1.8 million in 2005 to 4.7 million in 2010.
Since it first reported revenue of $1.7m in 2003, Mellanox's revenue has doubled every year. It turned an $8.9m net loss into income of $3.2m in 2005 on revenue 108% higher at $42m. However, growth has faltered in the first six months of this year and, although it managed to turn a $257,000 loss into income of $558,000, revenue rose just 9% to $19.3m.
Mellanox competes against QLogic Corp in InfiniBand products and lists its competitors offering alternative technologies as Marvell Technology Group, Broadcom Corp, Emulex Corp, QLogic, and Myricom.