The chip giant says its proposals for a Computational Services Overlay, unveiled at the Intel Developer Forum in San Francisco yesterday, would mirror the original vision of the net by sitting above underlying networks, masking their complexity, and offering a platform for new services.
Unlike the original internet, though, Intel is proposing a system that would rapidly be commercialized, with Intel technology at its heart from the beginning.
Intel’s CTO Pat Gelsinger kicked off his keynote speech to the Intel Developer Forum in San Francisco with a conversation with Vint Cerf, one of the originators of the internet’s key protocols and currently a senior VP and chief strategist at MCI WorldCom. The two concluded that the net is hitting a wall, as it is called on to support exponentially increasing demands with an architecture dating to the 1970s. The challenges range from capacity, reliability, security, and accessibility to regulatory issues.
While initiatives were in motion to tackle some of these challenges – the rollout of IPv6, for example – Gelsinger said these were long-term propositions that offered little relief in the short term.
At the same time, Gelsinger acknowledged that a rip-and-replace strategy was not an option. Rather, he proposed a Computational Services Overlay that would offer a platform for new services even as issues such as IPv6 were being addressed.
“We want to abstract away from the problem while the problem is being addressed in the underlying architecture,” said Gelsinger. This overlay could support services such as dynamically assigning resources to handle sudden surges of traffic to particular sites, or identifying the sources of internet attacks so that corporate firewalls could be automatically sealed against the offending IP addresses.
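Intel gave no interface details for such a service in the keynote. As a purely illustrative sketch – the class and function names below are invented for this example and are not an Intel or PlanetLab API – the following Python shows how an overlay service might push a list of offending source addresses out to corporate firewalls:

```python
# Hypothetical sketch only: the overlay service Gelsinger described was not
# specified in code; this merely illustrates "sealing" firewalls against
# attack sources identified elsewhere in the overlay.

class FirewallClient:
    """Stand-in for a corporate firewall's management interface."""
    def __init__(self, name):
        self.name = name
        self.blocked = set()

    def deny(self, ip):
        # A real deployment would push a rule to the device; here we just record it.
        self.blocked.add(ip)
        print(f"{self.name}: sealed against {ip}")


def seal_firewalls(firewalls, offending_ips):
    """Propagate a block list to every firewall on the corporate perimeter."""
    for fw in firewalls:
        for ip in offending_ips:
            fw.deny(ip)


if __name__ == "__main__":
    perimeter = [FirewallClient("hq-fw"), FirewallClient("branch-fw")]
    # Example addresses the overlay is assumed to have flagged as attack origins.
    seal_firewalls(perimeter, ["198.51.100.7", "203.0.113.42"])
```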
Gelsinger said the company had already been laying the foundation for this overlay through the Intel-backed PlanetLab consortium. According to Intel, the PlanetLab model “uses the internet to send data, with an overlay network of its own routers and servers… to add more capability”. Applications are distributed across this overlay network, which is capable of reorganizing itself dynamically.
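PlanetLab’s actual placement and reorganization mechanisms were not described in the keynote; the hedged sketch below simply illustrates the general idea of recomputing where application instances run as overlay nodes come and go (all node and instance names are made up):

```python
# Illustrative sketch, not PlanetLab's real mechanism: application instances
# are spread across the live overlay nodes, and the placement is recomputed
# whenever a node joins or drops out.

import itertools


def place_instances(app_instances, nodes):
    """Round-robin application instances across the currently live overlay nodes."""
    live = [n for n in nodes if n["alive"]]
    cycle = itertools.cycle(live)
    return {inst: next(cycle)["name"] for inst in app_instances}


if __name__ == "__main__":
    nodes = [
        {"name": "overlay-sfo", "alive": True},
        {"name": "overlay-nyc", "alive": True},
        {"name": "overlay-lon", "alive": True},
    ]
    instances = ["cache-1", "cache-2", "cache-3"]

    print(place_instances(instances, nodes))   # initial placement

    nodes[1]["alive"] = False                  # a node drops out of the overlay
    print(place_instances(instances, nodes))   # the overlay "reorganizes" around it
```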
As the PlanetLab model is adopted and commercialized, ISPs, carriers, corporations and other entities can add further nodes, expanding the network. ISPs, for example, could use the overlay to offer value-added services to their customers.
Unsurprisingly, Gelsinger said these nodes would typically be Intel-based servers, and that upcoming technologies Intel is planning to build into its chips, particularly around virtualization and security, would play a crucial role in supporting this model.
At the same time, he said there was no reason other processor architectures could not come into play, provided they abided by the appropriate standards. That said, he admitted that Intel hoped to get a leg-up by kick-starting the initiative.
Gelsinger insisted that while Intel hoped to commercialize the concept rapidly, it wanted to emulate the same kind of research environment that powered the growth of the original internet. The company would work with the appropriate standards bodies, he said, and where appropriate make IP available on an open-source or royalty-free basis.
Gelsinger also denied that the company was looking to supplant Cisco as the dominant architectural player on the net. “What we’re talking about is a new abstraction that sits on top of what they’re doing. We don’t eliminate what they are doing.”
Intel cited support from Hewlett-Packard for the initiative, while the US Public Broadcasting Service said it would work with Intel to use the PlanetLab model for HDTV broadcasts.
However, whether the industry flocks to Intel’s cause is open to debate. Gelsinger is traditionally the closing speaker at the twice-yearly Intel Developer Forum, and as such has to offer a grand vision to close each conference. Even for Intel, that is an awful lot of vision to turn into reality.