by William Fellows
Senior managers at Sun Microsystems’ headquarters have responded to some of the charges laid at the Mountain View, California-based company’s feet by the Open Software Foundation over its Open Network Computing system, in a continuing debate over the future of a standard method of doing distributed computing under Unix. Although Open Network Computing enjoys an unquestionable lead in installed base, a big question mark hangs over it – one constantly raised by the Foundation – namely whether the Remote Procedure Call element of the technology is actually being used at Open Network Computing sites. Remote Procedure Call is the crucial enabling element of distributed computing, and the Foundation has argued that if – as it believes – virtually no-one is using Sun’s Remote Procedure Call, then it had every justification in adopting the alternative Apollo-developed Network Computing System/RPC protocol for its Distributed Computing Environment technology.
Sun’s network computer group manager Craig Brown and Open Network Computing product manager Stuart Noyce refute the Foundation’s allegation. They say that companies including Visix Software, Frame Technology and Valid Logic – as well as Sun itself – have already incorporated RPC into their products, and that others are in the process of doing so. Furthermore, Brown and Noyce argue that, given the fundamental ways of doing distributed computing, the Foundation’s claim that the Distributed Computing Environment is transport-transparent is itself suspect – at the very least there remain unanswered questions about the technology in this respect. The minimum requirement for so-called transport transparency must be that a Remote Procedure Call can run over any network transport protocol, and there are a couple of ways of achieving this. One is to choose at compile-time which transport protocol will carry the application. This inevitably leads to multiple versions of any Remote Procedure Call, one for each protocol – and the decision process sits above the transport layer interface itself. The other – optimum – way is to bind the transport when the Remote Procedure Call is executed, at run-time. In this case the developer writes the Remote Procedure Call only once, and only one binary version is created. Brown and Noyce say that AT&T’s transport-independent Transport Layer Interface – now included in Unix – was developed with precisely this solution in mind, and that Sun intends to integrate the Transport Layer Interface into the System V.4-compatible version of SunOS, which will be announced before the end of the year.
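The compile-time versus run-time distinction Brown and Noyce draw can be sketched in miniature. The fragment below is purely illustrative – the names are invented and real ONC RPC is a C-language interface – but it shows the essential point: when the transport is looked up at the moment the call executes, one binary can travel over any protocol the system offers, whereas a compile-time choice bakes in one carrier per build.

```python
from typing import Callable, Dict

# Hypothetical "transports": each is just a function that carries an
# encoded call. A real stack would open a circuit or send a datagram.
def tcp_transport(payload: str) -> str:
    return f"tcp:{payload}"

def udp_transport(payload: str) -> str:
    return f"udp:{payload}"

# Registry consulted at run-time, standing in for a transport-independent
# layer such as AT&T's Transport Layer Interface.
TRANSPORTS: Dict[str, Callable[[str], str]] = {
    "tcp": tcp_transport,
    "udp": udp_transport,
}

def remote_call(procedure: str, transport_name: str) -> str:
    """One binary, any transport: the carrier is resolved when the call
    executes, not when the program is built - the run-time scheme the
    article describes."""
    transport = TRANSPORTS[transport_name]
    return transport(f"call({procedure})")

print(remote_call("getattr", "tcp"))  # tcp:call(getattr)
print(remote_call("getattr", "udp"))  # udp:call(getattr)
```

Under the compile-time alternative, the lookup above would be replaced by a hard-wired call to one transport function, forcing a separate build of the program for every protocol to be supported.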
The Open Software Foundation has not said whether its Distributed Computing Environment will embrace the compile-time approach or the preferable run-time solution. In addition, Brown and Noyce say that Sun’s Remote Procedure Call can be de-coupled from the naming service or protocol layer – default options as well as development choices are included – overturning the notion that it is non-extensible. Turning to the file system itself, and the Foundation’s claim that Sun’s Network File System is seriously outdated compared with the Andrew File System from Transarc Corp, which it has adopted for the Distributed Computing Environment, Noyce and Brown say that trying to draw distinctions between the two is a waste of time because it is like comparing apples and oranges. The Andrew File System, they argue, was originally developed for a mainframe-based academic environment at Carnegie-Mellon University, not on personal computers, and first appeared at around the same time as Network File System 1.0. By the time Andrew File System 4.0 for the Distributed Computing Environment is out – which could be anytime between six and 12 months for source code, longer for customer versions – we will see two products that are somewhat equivalent, although, they add, both will have limitations. Transarc will have to add some personal computer functionality
to the Andrew File System – it does not currently support disk booting – and the Foundation has additionally gone for PC-NFS and Lan Manager/X to integrate personal computers. Meanwhile Sun, which developed Network File System on diskless Sparcstations, has over the same period to add support for larger systems and servers, including local disk caching and a replicated file system function. Brown and Noyce stress, however, that Network File System will now support 60 or more clients, from micros up to mainframes, and is not limited to the 30 the Foundation has claimed. The Software Foundation’s charge that Sun withdrew its Network File System 3.0 submission to the Distributed Computing Environment request for technology is true, the two concede, but only because the Foundation stipulated a second-quarter delivery date “and we didn’t think we would make it with Network File System 3.0”. They say the Foundation had the choice of going with Network File System 2.0 – and receiving automatic upgrades to subsequent releases – or with the Andrew File System. The main criticism they level at the Foundation, however, is that it is slowing down distributed computing.
In addition to the likelihood of two standards emerging – both of which developers will have to support – they say that the Foundation is reverting to the kind of discussions about distributed computing that were being had in 1987 – Apollo, for one, wrote a white paper on the subject at the time. Brown and Noyce say the debate should be far beyond Remote Procedure Calls and protocols by now, and on to distributed services and applications software. Certainly the work going on to integrate Open Network Computing with Novell, 3Com and Banyan networking technology is a step in the right direction – it is said to be on schedule – and Sun itself will be showing off new distributed computing software at Networld in the autumn. The one thing on which the protagonists do agree is that distributed computing remains more a vision than a reality, and that widespread use of the technology is still some way off – as are the services and products to support it. One reason is that not all of the benefits are yet clear. It follows that the longer the players argue, the further the vision, the benefits – and the returns – will recede into the future. For potential users of distributed computing, apples and oranges could yet end up tasting more like lemons.