First, most efforts failed to account for the fact that business requirements are highly dynamic and therefore not easily solved by replicating software functionality from a snapshot in time. And, of course, reuse dreams have foundered because of the cowboy culture of software developers, who have long considered it unmanly (this was a field dominated by men) to use somebody else’s code when yours, of course, would be better.

Not surprisingly, the approaches at each point in IT history reflected the understanding of the times: during the mainframe era, when computing was highly centralized, the prevailing practice was simply to make carbon copies of code, which were subsequently modified into often unrecognizable forms.

With the emergence of relational databases in the 1980s, the conventional wisdom was that enterprise processes were data driven, which begat Computer-Aided Software Engineering (CASE). But that effort fell on its face because of its top-down approach, which was centered on the ever-elusive, and inevitably out-of-date, enterprise data model.

When distributed computing emerged during the following decade, we all grew older and wiser with a newfound embrace of object-oriented (OO) computing, which promised a far more flexible way of dealing with dynamically changing business processes. It was indirectly helped by a new generation of client/server 4GLs, from Microsoft Visual Basic to PowerBuilder, Delphi, and Gupta SQL Windows.

These languages helped tame OO by limiting or eliminating problematic features such as multiple inheritance, where developers could all too easily find themselves in over their heads after inheriting from a class whose complex lineage carried properties and behaviors they weren't aware of.
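The hazard can be sketched in a few lines of Python (which, unlike those 4GLs, does permit multiple inheritance); the class names here are purely illustrative. A developer who never examined the full lineage can end up with behavior from an ancestor they didn't intend:

```python
# A sketch of the multiple-inheritance surprise described above:
# the author of Order assumes saving means persistence, but the
# method-resolution order silently picks the cache-only version.

class Persistent:
    def save(self):
        return "written to database"

class Cacheable:
    def save(self):
        # Overrides save() with a different, weaker behavior.
        return "written to cache only"

class Order(Cacheable, Persistent):
    # Inherits save() from Cacheable, not Persistent, because
    # Cacheable comes first in the lineage.
    pass

print(Order().save())  # -> written to cache only
```

Single-inheritance languages and component models sidestep exactly this kind of ambiguity, which is part of why the 4GL vendors left the feature out.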

But credit Microsoft for really popularizing component-based development through the COM model, which was exposed through VB and Visual C++. It was put to excellent use in Office, which showed that embedding components was actually useful: through simple cut and paste, you could embed an Excel spreadsheet or cell inside a Word document or PowerPoint presentation without having to be a programmer.

And best of all, because those were loosely linked objects, they could update automatically: if you opened a Word document containing an Excel spreadsheet, a click was all it took to view the latest version.

Yet these advances made reuse possible only on a small, highly contained scale (e.g., all applications written in the same programming language on the same server or within the same workgroup), which limited the potential benefits. Besides, efforts to apply reuse at an enterprise level required too much up-front work, reminiscent of the ill-fated enterprise data models of the CASE era.

The advent of service-oriented architecture (SOA) has given the idea of reuse a potential new lease on life. With SOA, services declare themselves and, for a change, a critical mass of the industry has gotten behind the basics of standardizing what a service envelope looks like, how services are described, how they can be discovered, and how the security policies associated with a service are expressed.

In other words, with SOA, programmers no longer need to guess at the interface because it is published; and because vendors have universally accepted at least the basic building blocks, there is hope of cross-platform reuse.
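The point about published interfaces can be made concrete: because a WSDL-style service description is itself machine-readable, any client can enumerate a service's operations rather than guessing at them. The fragment below is an invented, minimal description (the service and operation names are assumptions for illustration), inspected with nothing but Python's standard library:

```python
# A sketch of why published interfaces matter for reuse: the service
# contract below (illustrative, not a real service) can be inspected
# programmatically, so no client has to guess at operation names.
import xml.etree.ElementTree as ET

WSDL_FRAGMENT = """
<definitions xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/">
  <wsdl:portType name="OrderService">
    <wsdl:operation name="submitOrder"/>
    <wsdl:operation name="getOrderStatus"/>
  </wsdl:portType>
</definitions>
"""

NS = {"wsdl": "http://schemas.xmlsoap.org/wsdl/"}
root = ET.fromstring(WSDL_FRAGMENT)
ops = [op.get("name") for op in root.findall(".//wsdl:operation", NS)]
print(ops)  # -> ['submitOrder', 'getOrderStatus']
```

The same discoverability applies regardless of the platform the service was written on, which is what makes the cross-platform reuse hope plausible.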

In a McKinsey Quarterly article published in mid-2006, the contention was made that software architecture alone is not enough to crack the reuse riddle. To really achieve reuse, internal development organizations need to think of their offerings as products.

That means adopting the business practices of software vendors: designing software based on market demand, and with the assumption that it will likely be deployed across a wide range of environments that you may not be able to predict. That's because you have to serve a wide enough market to make money, or, if you're an internal software development organization, to justify your budget.

And you need to take a full lifecycle approach to managing software. That means more closely engaging customers (the business) in planning and marketing the software, and standardizing application delivery, support, maintenance, and criteria for enhancing or retiring the software at end of life.

All this assumes that you work with the software assets that are in place. There’s no use in reengineering the past, except possibly for a few highly used processes that may already be duplicated throughout different silos, such as how to process a customer order. That would be a great place to expose a common service.

More fertile opportunities come from new development projects. The best opportunities would probably come from composite applications that utilize existing assets.

Ideally, there would be some form of agile development process in place so that you don't get bogged down in analysis paralysis. Nonetheless, making software products requires some analysis, because every product requires market research to identify and verify demand.

Will 2007 be the year when IT organizations finally reap the benefits of reuse in critical mass? Clues will emerge as organizations begin ramping their SOA implementations. SOA provides the technology base that could make a change in software lifecycle management practices possible, but it could also make it too easy to simply proliferate new services under the guise of lightweight development processes.

Obviously, the jury’s still out.