
DEVELOPERS GET TO GRIPS WITH ULTRA FAST NETWORKING SUPERCOMPUTER BOTTLENECK

It doesn’t seem so long ago that Ethernet was regarded as an esoteric and expensive solution looking for a problem. Where, people asked, was the market for all that bandwidth – even taking into account the fact that the effective throughput of typical Ethernets is limited by the performance of communications software to perhaps a tenth of the nominal 10Mbits per second? How times have changed. Now, many are talking about the 100Mbps Fibre Distributed Data Interface, FDDI, fibre optic network becoming as standard a feature of graphics workstations as Ethernet is today.

But even this order-of-magnitude improvement over Ethernet will not satisfy the needs of the high end of the scientific and engineering market – a market that until quite recently consisted of small numbers of Crays or other supercomputers and large mainframes. Now, with minisupercomputers spreading not only among existing supercomputer sites but also into facilities whose budgets would previously have run only to VAXes, the throughput problem is affecting many more users. And the latest technology to arrive on the scene, the personal supercomputer, promises to exacerbate the problem still further.

Eat it up

Each time scientists and engineers are provided with extra power, they rapidly learn to eat it up by automating more and more complex tasks – and in the process are likely to expose bottlenecks in other areas of their computing resources. Relatively slow links between the user and the supercomputer were acceptable only so long as users were conditioned to accept that supercomputer power was available only rarely, at great expense, and at distant computer centres. Now, users may expect to be able to use minisupers interactively; and in turn, demand is growing for much faster links either directly to Crays, or between minisupers and Crays for those problems that are just too big for the minisuper to handle.
In addition, the spread of graphics technology enables researchers in scientific fields to raise their sights again, and they expect to be able to shuffle huge amounts of image data between machines – in real time if possible. Accordingly, most of the minisuper and supercomputer manufacturers are already working with one or more of the specialist networking companies that are developing very high speed communications products to address these requirements.
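The scale of the problem is easy to see with back-of-envelope arithmetic. A minimal sketch follows, using the link speeds the article quotes; the 24-bit colour depth of the image is an illustrative assumption, not a figure from the article:

```python
# Rough transfer times for one uncompressed 1,024 x 1,024, 24-bit image
# over the link speeds mentioned in the article. The 24-bit colour depth
# is an assumed value for illustration.

IMAGE_BITS = 1024 * 1024 * 24  # about 25 million bits per frame

links_bps = {
    "Ethernet, effective (~1Mbps)": 1e6,   # a tenth of the nominal rate
    "Ethernet, nominal (10Mbps)": 10e6,
    "FDDI (100Mbps)": 100e6,
}

for name, bps in links_bps.items():
    print(f"{name}: {IMAGE_BITS / bps:.2f} seconds per image")
```

Even over FDDI, a single uncompressed frame takes a quarter of a second to move – nowhere near real time for animation.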

One of the most widely backed developments in the area is not a networking standard at all, but a means of providing a standard equivalent to the high-speed point-to-point interfaces already offered by Cray Research, Convex Computer Corp and others. The High Speed Channel standard is being drawn up by ANSI committee X3T9.3, and in its initial form defines a copper cable connection supporting 800Mbits-per-second – to be extended to 1.6Gbits-per-second in future. An indication of the interest in the standard is that IBM, DEC, Cray and Convex are all said to be involved in its use or definition. However, the use of copper cable is a prime reason for one of the limitations of the High Speed Channel standard – the 25-yard limit on its distance – and Integrated Photonics Inc of Carlsbad, California is developing products that extend the standard to fibre optics, enabling links up to 1,000 yards in length.

Integrated Photonics, a subsidiary of fibre optics specialist Pacan Corp, is offering its Toplink HSC products as an integrated package including a laser transmitter, a fibre optic receiver, and an ECL gate array formatting chip capable of handling up to 450Mbits-per-second. Two of these formatting chips work in parallel to provide the High Speed Channel bandwidth, and up to six could be used altogether, providing a maximum of some 2.4 Gigabits per second, according to Mathieu Van Den Bergh, director of marketing at the company. Although the standard is primarily designed for direct machine-to-machine connections, Van Den Bergh says that it is possible to have multiplexed links shared by several machines, and that some large companies are also in the process of adapting the products to support limited-topology networks.

The product is still expensive – around $2,000 to $2,500 for the basic components in quantities of 50 to 100. A complete package to link a Cray to an Alliant Computer Systems FX would be likely to set you back $20,000 or so – but Van Den Bergh claims that the Integrated Photonics connection still works out at the lowest price per megabit of throughput on the market. He envisages the products being used to allow graphics workstations to act as terminals to supercomputers for real-time display of images in applications requiring simulation or animation – and as he points out, the throughput is not needed only at the high end. Even a cheap graphics display with 1,024 by 1,024 pixel resolution could require hundreds of megabits of data per second to provide an acceptable refresh rate.

Complexity

Meanwhile, other companies are addressing the problem of true networking at the gigabit-per-second level, and in the process are having to cope with the complexity involved in getting protocols to operate at these speeds. Greg Chesson, a former Bell Laboratories researcher who is now Chief Scientist at Silicon Graphics Inc, is acknowledged as one of the key developers in the area of increasing networking performance by encapsulating communications protocols in hardware; the work in which he is involved will be covered in more detail in a future article. San Jose company Ultra Network Technologies Inc has adopted an approach that in some respects is similar to that outlined by Chesson, and is planning a series of networking products using high-speed buses and Open Systems protocols embedded in hardware. Ultra is keeping its cards close to its chest ahead of a launch planned for around May 1988, but the products are said to comprise a network of hubs, each of which uses a high-speed backplane called the Ultrabus.
Connections to host machines are via adaptors, which are said to be planned for the Cray HSX channel connection, VMEbus and FDDI; each adaptor is expected to include a chip set that runs the Open Systems Interconnection protocol stack, ultimately providing throughput of up to 1Gbit-per-second. But the Open Systems protocols are general-purpose, large and complex: doubters are already saying not only that Ultra’s solution will be expensive – some $10,000 to $15,000 for a board set supporting the protocols – but also that initial implementations will provide far lower bandwidth, perhaps as little as a tenth of the 1Gbit target.
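Van Den Bergh’s point about display bandwidth can be checked with simple arithmetic. The sketch below assumes an 8-bit colour depth and a 30Hz refresh rate – illustrative figures, not ones from the article:

```python
# Raw bandwidth needed to redraw every pixel of a 1,024 x 1,024 display
# on each refresh. Colour depth and refresh rate are assumed values.

def display_bandwidth_bps(width, height, bits_per_pixel, refresh_hz):
    """Bits per second needed to ship a full frame at the given refresh rate."""
    return width * height * bits_per_pixel * refresh_hz

# 8-bit colour refreshed 30 times a second:
bps = display_bandwidth_bps(1024, 1024, 8, 30)
print(f"{bps / 1e6:.0f} Mbits per second")  # roughly 252 Mbits per second
```

Even at this modest colour depth the raw figure is already well beyond FDDI’s 100Mbps; at 24-bit colour it triples, which is the point being made about needing channel-class links for display traffic.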



This article is from the CBROnline archive: some formatting and images may not be present.

CBR Staff Writer
