
HOW THE FAIRCHILD CLIPPER LIVES UP TO ITS NAME – PART TWO

Geoff Conrad concludes his review of the Fairchild Clipper chip set

The Fairchild Clipper chip set, already running at a blazing 33MHz and outperforming its rivals with bursts of up to 33 MIPS and a sustained throughput of over 5 MIPS, is claimed to be just a starting point rather than the limit of the technology. By the end of 1987, a version with throughput boosted to 10 to 15 MIPS is promised. How Fairchild’s uncertain future will affect this is not known. The US Justice Department is currently investigating the implications of a proposed merger with Fujitsu’s US chip interests to create an 80% Japanese-owned company. (The Pentagon is concerned about a possible Japanese future for Fairchild’s high-speed gate arrays and memory chips. Most observers dismiss the mooted solution of nationalisation, anathema to the Reagan Administration. But compared with secretly shipping arms to Iran, it may be seen as a trivial realignment of policy and principles… Both options may seem equally unpalatable to the Reagan Administration, but the chances of a US corporation taking Fairchild off Schlumberger’s hands are vanishingly small. The company must be pretty special to be able to turn in a loss despite the untold millions it gets from the Pentagon in lucrative defence pacts.)
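Those throughput figures imply a simple relationship worth spelling out: at a fixed 33MHz clock, MIPS is just the clock rate divided by the average number of cycles per instruction. A quick sketch of the arithmetic (mine, not Fairchild’s):

```python
CLOCK_HZ = 33e6  # the Clipper's 33MHz clock

def cycles_per_instruction(mips):
    """Average clock cycles per instruction implied by a MIPS figure."""
    return CLOCK_HZ / (mips * 1e6)

# Sustained 5 MIPS at 33MHz implies about 6.6 cycles per instruction;
# the 33 MIPS burst figure is one instruction per clock; hitting the
# promised 10 to 15 MIPS means cutting the average to roughly 2.2-3.3.
print(cycles_per_instruction(5))   # 6.6
print(cycles_per_instruction(33))  # 1.0
print(cycles_per_instruction(15))  # 2.2
```

This makes plain why the third strategy below, reducing clock cycles per instruction, is where the promised speed-up has to come from if the clock stays put.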

Strategies

The company has three major strategies to speed up the Clipper: improvements in the CMOS silicon technology to pack in yet more logic – it already integrates 846,000 transistors; using the extra silicon real estate to provide bigger caches and improved memory interfaces; and decreasing the number of clock cycles per instruction – presumably by increasing the number of hardwired RISC instructions.

The current implementation has 101 hardwired simple instructions, with a further 67 complex macro-instructions programmed into ROM. The macro-instruction unit is only used for these instructions – there is no decoding delay when executing the simple, hardwired instructions. Apart from LOAD and STORE operations, the only ones allowed to access memory, the simple instructions all operate on data held in very fast registers, as in a standard Reduced Instruction Set Computer. The macro-instruction unit has its own set of scratchpad registers, and also performs branching control. Most of the complex instructions deal with floating point conversions, string operations, and trap and interrupt handling.

The on-chip floating point unit executes at over 2 megaflops and operates in parallel with the three-stage pipelined integer execution unit. On-chip floating point was a major factor in the decision by benchMark Technologies of Kingston, Surrey, to offer the Clipper as an option on its benchMark 32 Unix system. The company is extremely enthusiastic about its performance, claiming that the 2 megaflops gives seven to 10 times the floating point performance of the Motorola 68020 with a 68881 co-processor.

Fairchild claims that the Clipper chip set – a 32-bit CPU linked by two 32-bit buses, one for data and one for instructions – delivers up to 133 Mbytes per second from the separate 4Kb instruction and data caches, each of which has a built-in memory management unit. It has a 4Gb physical address space and hardware support for virtual memory.
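The split between hardwired and ROM-based instructions amounts to a two-level dispatch, which can be sketched roughly as follows. The opcodes, register layout and macro expansion here are invented for illustration; they are not taken from the Clipper’s actual instruction set.

```python
# Hypothetical sketch: simple opcodes execute directly on fast registers
# with no decoding delay; complex opcodes are expanded from a
# macro-instruction ROM into sequences of simple operations.

def op_add(regs, a, b, d):
    regs[d] = regs[a] + regs[b]

def op_sub(regs, a, b, d):
    regs[d] = regs[a] - regs[b]

HARDWIRED = {"ADD": op_add, "SUB": op_sub}   # stands in for the 101 simple ops

# ROM table: each complex opcode expands into simple ops (stands in for the 67).
MACRO_ROM = {
    "ADD3": [("ADD", 0, 1, 3), ("ADD", 3, 2, 3)],  # r3 = r0 + r1 + r2
}

def execute(program, regs):
    for op, *args in program:
        if op in HARDWIRED:
            HARDWIRED[op](regs, *args)    # hardwired: runs immediately
        else:
            execute(MACRO_ROM[op], regs)  # complex: expanded from ROM first
    return regs

regs = execute([("ADD3",)], [1, 2, 3, 0])
print(regs[3])  # 6
```

The point of the structure is visible even in this toy: the common, simple path pays no expansion cost at all, which is exactly the claim Fairchild makes for its hardwired instructions.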
It makes extensive use of registers, buffers and cache memory to improve performance – there are enough register sets to support context switching: rather than moving everything into memory and loading a fresh set of data when control is passed to a new process, the Clipper simply leaves the registers being used by an application intact and switches to a new set. It has four cache strategies to optimise system performance and maintain data integrity:
* Non-cachable: all data accesses are routed directly to main memory – this mode is used for communicating with the input-output space in main memory, for example;
* Write through: data modified in the cache is immediately modified in main memory as well, so the main memory data matches the data held in the cache at all times;
* Copy back: data modified in the cache is only updated in main memory when those cache locations are replaced. Performance is improved by saving memory accesses, but the main memory data is stale and does not correspond with the cache data until updated;
* Bus watch: with multiple caches, it is essential to keep all the cached data consistent with the other caches and main memory. Data in main memory can be changed by virtual memory paging or direct memory access at any time. The caches watch the Clipper Bus for memory addresses that match their contents: if data is written to a cached memory address, each cache holding it automatically updates its own copy as well; if data is read from a cached memory address in copy-back mode, the cache rather than main memory supplies the data, as the data in the cache may be more current than the copy in main memory.
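The practical difference between write through and copy back is how many writes actually reach main memory. A minimal sketch (my own illustration, not Fairchild’s implementation) counting memory traffic for one repeatedly written cache location:

```python
class CacheLine:
    """Toy model of one cached location under the two write strategies."""

    def __init__(self, policy):
        self.policy = policy   # "write_through" or "copy_back"
        self.dirty = False
        self.mem_writes = 0    # writes that actually reach main memory

    def write(self, value):
        self.value = value
        if self.policy == "write_through":
            self.mem_writes += 1   # main memory updated on every write
        else:
            self.dirty = True      # main memory left stale until replacement

    def evict(self):
        if self.policy == "copy_back" and self.dirty:
            self.mem_writes += 1   # stale main memory updated only now
            self.dirty = False

wt, cb = CacheLine("write_through"), CacheLine("copy_back")
for v in range(100):   # 100 writes to the same location before eviction
    wt.write(v)
    cb.write(v)
wt.evict()
cb.evict()
print(wt.mem_writes, cb.mem_writes)  # 100 1
```

One hundred memory writes against one: that saving is the performance case for copy back, and the stale main-memory copy it leaves behind is exactly what the bus-watch strategy exists to paper over.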

Hit rate

Fairchild claims a cache hit rate of over 96% for instructions and over 90% for data. This is helped by burst-mode updating: each request to main memory results in four sequential 32-bit words being loaded on the bus, if the cache needs them. (In high-level programming, very few accesses are for single words. The four words are loaded into a superfast quadword buffer. The caches also prefetch the next sequential quadword to improve the hit rate.)

The CPU has a three-stage pipeline: fetch, when the caches access main memory and the CPU loads its buffers; decode, when instructions are decoded and the resource manager schedules the control flow; and execute, when the floating point unit and integer execution unit operate in parallel, as do the three stages of the CPU pipeline. The integer execution unit is itself a three-stage pipeline, allowing four instructions to be executed concurrently in the execute unit.

Fairchild has taken full advantage of the fact that it did not have to maintain upward compatibility with an architecture designed in the Dark Ages – what could Motorola or Intel designers have done if they had started with a clean sheet of paper (don’t say the iAPX-432)? And if what is out now really is just a starting point, with significant performance increases to come, the next generation promises to widen the distance between the Clipper and its competitors even more.
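The claimed hit rates feed directly into average memory access time. As a rough illustration: the hit rates below are Fairchild’s claims, but the one-cycle hit and ten-cycle miss latencies are assumptions of mine, purely for the sake of the arithmetic.

```python
def effective_access(hit_rate, cache_cycles=1, memory_cycles=10):
    """Average cycles per access for a given cache hit rate.

    The 1-cycle hit and 10-cycle miss latencies are illustrative
    assumptions, not Clipper specifications.
    """
    return hit_rate * cache_cycles + (1 - hit_rate) * memory_cycles

print(round(effective_access(0.96), 2))  # instruction cache: 1.36
print(round(effective_access(0.90), 2))  # data cache: 1.9
```

Under these assumptions the average access costs barely more than a cache hit, which is why a few points of hit rate – and tricks like quadword prefetch that buy them – matter so much.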


This article is from the CBROnline archive: some formatting and images may not be present.
