Technology / AI and automation

VENDORS SCRAMBLE TO DELIVER HOT NEW SOFTWARE TESTING TOOLS

Part One of a two-part article investigating the software testing market – from Computer Business Review, a sister publication.

On June 4 last year, the maiden flight of the unmanned Ariane 5 rocket ended in disaster. Only 30 seconds after take-off, the rocket veered off its flight path and started to break up. Billions of dollars’ worth of mostly uninsured satellites and probes, representing many years’ work by academic scientists, literally went up in smoke. When the investigation team reported its findings, it concluded that the disaster was down to a lack of testing of the Ariane 5’s flight control and guidance software. The alignment function of the inertial reference system – which detects, among other things, whether the rocket is falling over on the launchpad, and which had worked perfectly well on older models of Ariane – had been ported to Ariane 5 and not fully re-tested within the new environment. As the European Space Agency investigation team put it: “The alignment function, which served a purpose only before lift-off, remained operative afterwards.” It took only a few seconds for the erroneous code to corrupt the rocket’s guidance and attitude systems, and as the rocket veered wildly off course the self-destruct mechanism kicked in.

Fellow European software engineers pounced on the example of the Ariane 5. The British Computer Society’s Software Testing special interest group had little sympathy, awarding the failure its Bug of the Year accolade. But as the Ariane 5 case and many, many other disasters – both big and small, private and public – have highlighted time and time again, software testing is all about business risk management. What it also highlights is the perennial problem of software development: testing is often seen as merely the tail-end of the program development cycle. “Many people see testing as a washing machine where dirty software is dumped and washed clean,” says Madan Sheina, a technology analyst at market researcher Ovum Ltd.
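The class of fault behind the failure – an unhandled overflow when a 64-bit floating-point value, the horizontal bias, was converted to a 16-bit signed integer – can be sketched in a few lines. This is a simplified Python illustration, not the actual Ada flight code, and the numeric values are invented for the example:

```python
# Simplified illustration (not the actual Ada flight code): the Ariane 5
# inquiry traced the failure to an unhandled overflow when a 64-bit float
# (the horizontal bias) was converted to a 16-bit signed integer. Ariane 4
# trajectories kept the value in range; Ariane 5's higher horizontal
# velocity did not. The sample values below are invented.

INT16_MIN, INT16_MAX = -32768, 32767

def horizontal_bias_to_int16(x: float) -> int:
    """Convert as the original code did; an out-of-range value raises,
    mimicking the Ada operand-error exception that had no handler."""
    n = int(x)  # truncate toward zero
    if not INT16_MIN <= n <= INT16_MAX:
        raise OverflowError(f"horizontal bias {x} exceeds 16-bit range")
    return n

print(horizontal_bias_to_int16(20000.0))   # in range: behaves as on Ariane 4

try:
    horizontal_bias_to_int16(65000.0)      # out of range: the Ariane 5 case
except OverflowError as exc:
    print("unhandled in flight:", exc)
```

The point the investigators made is visible in the sketch: the conversion was correct for every input the old environment could produce, and only re-testing against the new environment’s inputs would have exposed it.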
While that washing load was relatively small when the industry was dominated by mainframes, minicomputers, hierarchical Cobol programs and dumb terminals, the typical target test environment in a commercial setting has since grown vastly in complexity. Technologies such as graphical user interfaces, client-server applications, rapid application development and, more recently, component-based development and the Internet have escalated both the difficulty and the importance of software testing; the old manual methods of testing simply could not keep pace. That has given rise to an explosion in software tools that go a long way towards automating the process of testing those more complex systems. Analysts at Yankee Group Research Inc are even more adamant: “Testing requires automation to provide higher volume, speed and thoroughness of testing, but also to catch hard-to-find errors, co-ordinating multi-user tasks, and managing and tracking tests and test results. Good testing is not feasible without automation.”

Fellow market watcher International Data Corp categorizes the type of tools that can handle these new technologies as client-server automated software quality tools, as opposed to earlier products used to test in mainframe, Unix or proprietary environments. It reckons that between 1993 and 1995 the market for such tools tripled to $98m – but it suggests the real growth is yet to come. By 2000, says IDC, the client-server testing tools market will be worth more than $1bn and represent around 80% of the complete testing tools market. Researchers at Ovum are more conservative, putting the total market at $237m in 1997, up 43%, and rising to $575m in 2000. By far the biggest market is for test development and execution tools – products that help programmers develop test scripts and test cases.
Of the whole testing tools market, 64% falls under this heading, according to Ovum. Alongside these main tool functions are products for load-testing and simulation. These assess whether the system will be able to cope with the projected number of users, transaction throughput, read/write rates and so on. Other testing tools fall into the categories of coverage analysis, planning and management, requirements analysis and test-case generation.

The vendors are all heading towards the client-server space, but from very different directions. The market leader across the board is IBM Corp, but its offerings are still almost exclusively mainframe-based. According to IDC’s figures for 1995, IBM held 12% of the testing tools market, closely followed by Pure Atria Software Corp and Mercury Interactive Corp. But stripping out the mainframe players, the picture of the real growth market – client-server – emerges. The lion’s share of that market is held by the company that arguably started it, Mercury Interactive, which captured 38% of the $98m client-server tools market in 1995. Behind it is the champion of the Windows testing world, SQA Inc, which holds 12%. With more of a focus on distributed testing tools, Segue Software Inc has 10% of the pie, and behind it Pure Atria, a specialist in error-detection software, holds 5%.

But that picture has been far from static. Over the past 18 months there has been frenetic consolidation and positioning by larger software concerns within the market, and the size of the deals gives an indication of just how hot the market for client-server testing tools has become. In early 1996 Pure Software Inc merged with configuration management software company Atria in a deal valued at $681m. Compuware Corp made its client-server play with the acquisition of Direct Technology Inc. But perhaps most significantly, programming tools powerhouse Rational Software Corp is buying SQA for $320m, only months after it took control of Microsoft Corp’s Visual Test division.
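What test development, execution and load-testing tools automate can be sketched in miniature. The following Python sketch is purely illustrative – the names, including the `process_order` function standing in for the system under test, are hypothetical:

```python
# A miniature sketch of what test development and execution tools
# automate: a repeatable test script with assertions, plus a toy load
# test that drives the system under test from many concurrent threads.
# All names here (process_order, OrderTests) are hypothetical.
import threading
import time
import unittest

def process_order(quantity: int, unit_price: float) -> float:
    """Hypothetical system under test."""
    if quantity < 0:
        raise ValueError("quantity must be non-negative")
    return quantity * unit_price

class OrderTests(unittest.TestCase):
    """A test script: each case is written once, then re-run on demand."""
    def test_total(self):
        self.assertEqual(process_order(3, 2.5), 7.5)

    def test_rejects_negative(self):
        with self.assertRaises(ValueError):
            process_order(-1, 2.5)

def load_test(n_users: int = 50, requests_per_user: int = 20) -> float:
    """Simulate concurrent users; report throughput in requests/sec."""
    def one_user():
        for _ in range(requests_per_user):
            process_order(3, 2.5)

    start = time.perf_counter()
    threads = [threading.Thread(target=one_user) for _ in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    return (n_users * requests_per_user) / elapsed

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=0)
    print(f"throughput: {load_test():.0f} requests/sec")
```

The value the vendors are selling is exactly this repeatability at scale: the same scripts re-run unattended after every change, and the load test answers the capacity questions above without recruiting fifty human testers.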
All these moves added momentum to the client-server side of the testing tools market – but testing requirements are being stretched beyond that by two major developments. The switch to component-based programming, where developers plug together various objects of their own and others drawn from libraries, presents formidable challenges to the testing process. How can a tester verify the integrity of a piece of code if it was specified and written by others? Microsoft, for one, suggests that there will be some process of authentication under which object code modules will be certified as working as specified. The other main development in the testing tools market, as elsewhere, is the rise of the Internet. All the major testing tools companies are trying to address the challenges of the Internet, and while most see it as an extension of component software testing, if anything the rise of the Internet is the most potent argument for automated testing yet. Although the major vendors have developed strategies to address Internet applications testing, they have yet to deliver products. As they do, this specialist market is likely to see an influx of new entrants.
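One plausible shape for such certification is a black-box contract test: the integrator never sees the component’s source, but exercises it against its published specification. A minimal, hypothetical Python sketch, with a sort routine standing in for the third-party component:

```python
# A hypothetical black-box "certification" check: the tester cannot read
# the component's source, so the component is exercised against its
# published contract. Here the contract is "sorts a list of integers
# ascending, losing no elements"; certify_sort_component is an invented
# name, not a real tool or API.

def certify_sort_component(sort_fn) -> bool:
    """Return True if the black-box component honours the contract."""
    cases = [[], [1], [3, 1, 2], [5, 5, 1], list(range(100, 0, -1))]
    for case in cases:
        out = sort_fn(list(case))
        if list(out) != sorted(case):   # ordered, and no elements lost
            return False
    return True

print(certify_sort_component(sorted))          # True: meets the contract
print(certify_sort_component(lambda xs: xs))   # False: returns input unsorted
```

The check verifies behaviour, not implementation – which is the only kind of verification available when the code was specified and written by others.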
