July 4, 1990

MISERS FAIL TO SWAY SHARERS IN COMMERCIAL PARALLEL PROCESSING DEBATE

By Sonya McGilchrist

The share-everything versus share-nothing issue was the main subject of debate on the first day of a three-day conference on parallel processing, held at London’s Heathrow Park Hotel last week. Organised by Unicom, the conference’s theme was the suitability of parallel processing for commercial applications, specifically database management systems. Share-nothing versus share-everything refers to the way in which memory is accessed on different machines, which is crucial to processing speed. So far, the concept of parallel processing has mainly been applied in numeric-intensive scientific work via array processors for applications such as matrix arithmetic. As might be expected, companies at the event were at pains to point out the advantages of parallel processing in the commercial field – especially with their own particular products. Mostly, they seemed to be pointing out these advantages to each other. Around 60 people attended the conference, most of them company representatives or academics. Fujitsu, Intel and Teradata UK were among 16 companies with stands in the exhibition room.
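To make the distinction concrete – this is an illustrative sketch in Python, not code shown at the conference, and all names and figures in it are invented – a share-everything machine has every processor read and write one common memory, while a share-nothing machine gives each processor private memory and combines results afterwards:

```python
from threading import Thread, Lock

# Share-everything (toy): every worker reads and writes one common
# memory, guarded by a lock so concurrent updates stay consistent.
shared_memory = {"counter": 0}
lock = Lock()

def shared_worker(n_ops):
    for _ in range(n_ops):
        with lock:                         # contention point: every worker
            shared_memory["counter"] += 1  # funnels through the same memory

# Share-nothing (toy): each worker owns a private partition and needs no
# lock; partial results are combined afterwards (here, by summing).
def partitioned_worker(partition, n_ops):
    for _ in range(n_ops):
        partition["counter"] += 1          # private memory, no contention
    return partition

threads = [Thread(target=shared_worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

partitions = [partitioned_worker({"counter": 0}, 1000) for _ in range(4)]

print(shared_memory["counter"])               # 4000, via shared memory
print(sum(p["counter"] for p in partitions))  # 4000, via merged partitions
```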

Plod serially

Speakers included representatives from Parsys, Cray Research and Stratus Computer. Parallel processing, with a single application broken up and shared over several or many processors, is seen as one of the most promising escapes from the way today’s computers plod through their tasks serially, one step at a time. John Spiers, product marketing manager for Oracle Corp, defined the workload of a database server as inherently parallel – the total workload is made up of a large number of relatively small independent activities, simultaneously originating from a large number of independent users. According to Spiers, this is in contrast to the traditional applications of supercomputers and parallel processors, which are concerned with small numbers of compute-intensive operations in areas such as image processing. Spiers reckons that with the availability of parallel systems, centralised computing will regain its position as a cost-effective alternative to distributed computing, avoiding the performance and manageability problems inherent in the use of networked configurations, and enabling managers to address control, security and reliability problems more effectively. There are two types of parallel computer vying for dominance at the moment, both of which were represented at the conference. Kicking off for the first type was Encore Computer’s Vincent Rich, who extolled the virtues of the share-everything approach, as used in Encore’s Multimax. This involves a relatively small number of processors – up to 20 in the Multimax – communicating via a backplane bus.


All main memory is also linked to the bus, enabling the processors to access it simultaneously. The Multimax also incorporates a two-level cache memory subsystem on each processor board to hold the most recently accessed program statements and data for instant reuse. This, says Encore, prevents the performance degradation caused by processors sharing all memory, as it minimises references to the main memory. According to Rich, a shared, or as he put it, tightly coupled design offers elegance and simplicity, whereas loosely coupled or non-share systems suffer from the replication of program code and data on each node, communications latency on the network and difficulty in maintaining the global locks required to preserve the integrity of the database management system.

Coming in with the share-nothing approach was Susan Jakobek of Meiko. As Meiko is a Bristol-based spin-off of Inmos – the people who designed the Transputer chip – her approach was not surprising. During a presentation entitled Information Flow in an Enterprise, Ms Jakobek clearly identified the basic issues in parallel computing. Along with other delegates she outlined the fundamental requirements of a database management system, splitting it into four main areas. Firstly, the database should convert raw data into useful information, instead of acting simply as a computerised filing cabinet. Secondly, it should be scalable – increasing the size of the system as the workload increases. Thirdly, it should be flexible; and lastly, cost-effective. Ms Jakobek also pointed out that in the end it boils down to one thing – speed. The speed at which the processors can access the memory, and therefore process the application, is obviously crucial in systems that claim to be the answer to intensive, time-consuming applications. Meiko has ported Oracle’s relational database on to its Computing Surface architecture, which uses hundreds of Transputers – the exact number depends on the workload. Each processor has access only to its own memory – hence the share-nothing tag. Meiko says that one of the basic advantages of the system is the ability to configure the processors precisely, according to the size and type of the database workload, therefore optimising resources. Also, small amounts of processing power can be added at a time, enabling users to match their processing power to their applications as closely as possible. Shared-memory machines, according to Ms Jakobek, do not address this scalability issue. She reckons that the effect of common data and shared memory is that the machine is limited by the speed of the memory-access mechanism and the inability of the architecture to accommodate increasing resources. At some point, increasing demands on shared memory will cause a bottleneck, reducing the speed at which the machine can process applications. She did not mention that on non-share memory machines there will ultimately be an interconnect bottleneck, caused by the fixed bandwidth of the interconnect. Other speakers addressed the share/non-share issue, splitting into one or other of the camps.
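The partitioning idea behind Meiko’s share-nothing tag can be sketched in a few lines – again an invented illustration, not Meiko’s actual software: rows are hash-partitioned so that each node only ever touches data in its own memory, and a query is answered by the node that owns the key.

```python
# A minimal sketch of hash partitioning across share-nothing nodes.
# Names and figures are invented; this is not Meiko's actual software.

NODES = 8                      # configurable: add nodes to match the workload

# Each "node" is a private list standing in for that processor's own memory.
local_store = [[] for _ in range(NODES)]

def node_for(key):
    return hash(key) % NODES   # which node owns this row

def insert(key, value):
    local_store[node_for(key)].append((key, value))

def lookup(key):
    # Only the owning node's memory is scanned; there is no shared structure.
    for k, v in local_store[node_for(key)]:
        if k == key:
            return v
    return None

for i in range(100):
    insert(f"cust{i}", i * 10)

print(lookup("cust42"))        # 420
```

Scaling up means raising the node count and redistributing rows, which is where the fixed interconnect bandwidth mentioned above eventually comes into play.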
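Ms Jakobek’s bottleneck argument can also be put as arithmetic. In this toy model – the figures are invented, not from the conference – every transaction makes a number of serialised references to shared memory, so the memory bus imposes a throughput ceiling that adding processors cannot lift:

```python
# Toy model of the shared-memory bottleneck: if every transaction makes
# m serialised references to shared memory, the bus caps total
# throughput no matter how many processors are added. Figures invented.
m, t_bus = 20, 1e-6                 # refs per transaction, seconds per ref
bus_cap = 1 / (m * t_bus)           # ceiling: 50,000 transactions per second
per_cpu = 2_000                     # what one processor manages alone
for n in (1, 5, 20, 50, 100):
    print(n, min(n * per_cpu, bus_cap))   # flattens once the bus saturates
```

The same shape of argument, with interconnect bandwidth in place of bus time, yields the counterpoint noted above for share-nothing machines.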

ICL Europroject


Guy Haworth of ICL explained why a share-nothing design has been chosen for the European Declarative System (EDS) project. The object of EDS, on which ICL is working with Bull and Siemens, is to develop a scalable, parallel system which in the main will be an SQL and information server. The basic reason for choosing a share-nothing design is that the companies feel that computer architecture should avoid shipping data en masse, as share-everything systems do. The EDS system consists of eight to 256 processing elements. The system will, according to Haworth, scale linearly with size, with a 256-element machine targeted at 12,000 transactions per second. Despite the considerable discussion and debate, both sides of the argument seem firmly entrenched on their respective sides of the fence. Meanwhile, John Spiers of Oracle, whose databases are running on both types of system, pointed out the advantages and disadvantages of both – and noted the emergence of specialised hardware architectures based on parallelism, dedicated to running Oracle.
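Taking the linear-scaling claim at face value – the division below is ours, not Haworth’s – the 12,000 transactions-per-second target implies a fixed per-element rate:

```python
# Back-of-envelope arithmetic on the EDS target: under strictly linear
# scaling, 12,000 tps across 256 elements implies a fixed rate per
# element; the 8-element figure is our extrapolation, not ICL's.
target_tps, elements = 12_000, 256
per_element = target_tps / elements      # 46.875 tps per element
print(round(per_element, 1))             # 46.9
print(round(per_element * 8))            # 375 tps for the smallest configuration
```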
