While mainstream PC microprocessor clock speeds stay below five gigahertz, core counts are starting to climb. Dual-core microprocessors are now available from AMD and Intel, with Intel already shipping quad-core devices and AMD promising availability later this year. Sony’s Cell processor, powering the Sony PlayStation 3, has nine CPUs, and Azul Systems produces the 48-core Vega processor for its Compute Appliances. We are also witnessing a return of the numerical accelerator.
In the early 1980s, at the start of the PC revolution, it was common to purchase a numerical co-processor that plugged in next to the CPU and boosted the computer’s floating-point capability. Subsequently, chip manufacturers embedded floating-point processing within the standard CPU, but now we are seeing a return to side-by-side chips for number crunching, exploiting multi-core designs.
For example, ClearSpeed produces the CSX600, a parallel processor with 96 cores that executes 25 billion 64-bit floating-point operations per second. The vendor’s Advance Accelerator Board, which carries two CSX600 chips, fits into a standard PC.
Last September, Intel announced a five-year plan to deliver a chip with 80 floating-point cores, using advanced optical technology for the complex interconnections and delivering one teraflop (a trillion floating-point operations per second) on a single device. That is super-computing capability on a desktop PC which, not long ago, was available only on state-of-the-art, exclusive machines costing many millions of dollars.
To exploit these new devices, developers will need to program using parallel algorithms. This means identifying the parts of the code that can run simultaneously and assigning them to different cores. Parallel programming has been researched for many years to serve the needs of high-performance computing (HPC). How well those techniques filter down to the general developer will depend on the software tools available to support multi-core programming.
Computer languages will need to recognize core availability and provide facilities for assigning work. There is a close correspondence between multi-core work assignment and the way server load-balancing is performed in a data center.
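As a minimal sketch of the idea, today’s mainstream languages already offer libraries that query core availability and farm independent work units out across cores. The example below uses Python’s standard multiprocessing module; the work function `square` and the helper `parallel_squares` are illustrative names, not part of any product mentioned above.

```python
import os
from multiprocessing import Pool

def square(n):
    # An independent unit of work that can run on any core.
    return n * n

def parallel_squares(values):
    # Ask the operating system how many cores are available,
    # then spread the independent work units across them.
    cores = os.cpu_count() or 1
    with Pool(processes=cores) as pool:
        # map preserves input order while distributing the calls.
        return pool.map(square, values)

if __name__ == "__main__":
    print(parallel_squares(range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Note that the library automates the chores of scheduling and result collection, but the programmer still had to decide that each call to `square` is independent of the others — exactly the judgment discussed below.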
The availability of cheap multi-core chips will open the door for parallel programming to become commonplace, but the software tools will need to keep up. Currently, most code must be custom-written to exploit the growing number of these new-generation devices.
Parallel programming for the masses will succeed only if the routine chores can be automated. However, research has shown that what cannot be automated is the programmer’s judgment in deciding what can and cannot be made parallel.
Source: OpinionWire by Butler Group (www.butlergroup.com)