August 16, 1996


By CBR Staff Writer

From Computer Business Review, a sister publication.

Intelligent computing is taking a large step forward. Technology which simulates real life, growth and evolution has been born. When the US Justice Department set out to investigate the monopolistic tendencies of software giant Microsoft in 1994, it did not just listen to the grumblings of rivals. To extrapolate the true significance of the company’s might, it used some of the most advanced computing technologies available to produce a white paper looking at how the market would be affected if Microsoft were left unchecked. One of the key technologies employed in the project was ‘artificial life’ (a-life), a branch of computer science that is gradually making its way out of the labs and into corporations and government agencies around the world. A-life is already used in a variety of applications, from managing millions of dollars worth of financial market investments to designing telecommunications networks. And, propelled by a tiny cadre of researchers-turned-entrepreneurs, the technology is challenging some of the notions of exactly what data and information are, and even what constitutes ‘life’ in the first place.


The ethos behind a-life is not new. Since the creation of the first electronic computer nearly 50 years ago, researchers and developers have been trying to develop machines which behave like living things. Robots, with mechanical arms and assembly-line mentality, came first, followed by artificial intelligence (AI) attempts to clone the expertise of humans onto computer disks. Like AI before it, a-life attempts to use biological, social, and/or human behavior as the basis for a highly complex level of computing. But whereas AI was specifically concerned with creating machines that thought or reasoned like humans, a-life is more interested in how biological entities – from basic cells to more complex systems like insects and mammals – adapt and thrive within their environments. These technologies take their inspiration not from the logical functions of the brain, but from the life and death of a living organism. The rationale is that if computer programs, which change and evolve over time, can be viewed as a ‘life’ form, then their development can be measured and, to some degree, predicted. As in organic evolution, viable data structures – those which have the strongest set of attributes – thrive, while incomplete or lesser data sets die. The entities (sometimes called biologies) within an a-life world live in system software, are programmed with parameters for existence, and evolve and fight for survival depending on appropriate adaptations to the program. Sometimes thought of as controlled viruses, these computer creatures learn to replicate, to protect themselves from predators, and to maintain a survival level that will ensure their continued evolution. One of the overall aims of a-life is to see whether systems can get to the point where they evolve to do things that would be too complex to program.
While a-life is a unique computer science, it has absorbed a number of technologies that were once considered radical aspects of mathematics and computer science, such as cellular automata and genetic algorithms. A-life is not really one technology but a collection of different, fairly well-understood technologies which can be used together to simulate life-like situations. These include AI, neural networks, intelligent agents, genetic algorithms and computer-based simulation.


These pursuits all have some inherent ‘biological’ nature, hence their grouping under the a-life heading. Genetic algorithms, for example, use advanced computing techniques to study how a particular class of ‘organism’ might evolve, based on inherited characteristics and mutations. Cellular automata, which have their roots in the theoretical work of computer pioneer John von Neumann in the 1950s, treat pieces of data like biological cells.
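The evolutionary loop a genetic algorithm runs, inheritance, mutation and survival of the fittest, can be sketched in a few lines of Python. This is a toy illustration, not code from any company mentioned here: the ‘organisms’ are bit strings, fitness is simply the count of 1-bits, and in each generation the fitter half of the population survives to breed while the rest die off.

```python
import random

random.seed(0)
GENES, POP = 20, 30

def fitness(org):
    # the 'strongest set of attributes' here is simply the number of 1-bits
    return sum(org)

def crossover(a, b):
    cut = random.randrange(1, GENES)          # single-point crossover
    return a[:cut] + b[cut:]

def mutate(org, rate=0.01):
    # each 'gene' has a small chance of flipping
    return [gene ^ 1 if random.random() < rate else gene for gene in org]

# a random starting population of bit-string organisms
population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]

for _ in range(60):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP // 2]         # the fittest half thrives
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP - len(survivors))]
    population = survivors + children         # the weaker half has 'died'

best = max(fitness(org) for org in population)
print(best)   # climbs toward the maximum of GENES over the generations
```

Because the survivors are carried over unchanged, the best fitness never decreases; over the generations the population converges toward the all-ones string without anyone programming that answer in.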

The key point is that, in living entities, individual cells are affected by what happens to cells around them. Sometimes the death or malfunction of one cell will adversely affect those cells closest to it, but leave others unscathed. To explain this more graphically, the cells around a knife wound in the hand are more likely to die or be thrown into disorder than those cells which are located half a body away in the leg. Applying this metaphor to the stock market, certain economic changes, such as falling interest rates or major shifts in the price of gold, will affect certain stocks much more than others. By modeling stocks as groups of cells, and applying outside forces to them (a figurative knife), financial analysts can see how they react. Another possibility would be to simulate the long-term growth of a business using a-life. A model of a start-up company could be allowed to evolve in the confines of a computerized business environment, where outside factors are controlled by the user. Weaknesses in the company’s plans might become evident when certain factors are introduced into the business model (factors similar to disease or predators in the real world). Users could make changes to the basic model to help strengthen resistance to these factors. The advantage is that viewing 10 years of the new company’s potential growth, with a variety of external factors, could be done in just a few hours. Using the precepts of a-life to improve financial yield or build business plans may seem like a monumental leap of faith to those weaned on the cold, hard facts of computer rigidity. Yet, it is actually rather easy to separate a-life fact from fantasy. The problem for most non-technical professionals trying to understand a-life is wondering how a program can be created that actually evolves, grows or changes without human intervention.
After all, software programs are overwhelmingly designed to do only what their programmers intend them to do, and as such are rigidly controlled. How then, can components of a program live, die, or even grow stronger within the confines of a computer? Understanding the principle of a-life is as easy as understanding the rudiments of a simple board game called Life, developed by John Conway in 1970. Played on a basic checkerboard of the type used for chess, Life is something of a combination of checkers and the game of Pente (where tokens or game pieces are flipped depending on their position relative to other pieces around them). In Life, all of the board squares, regardless of color, are cells. Each cell has eight neighbors whose borders touch that cell at some point (four neighboring cells are flush, and four more touch only at the corners). A cell is either active, or ‘ON’, when a piece rests on it, or inactive and ‘OFF’ when it is unoccupied. A beginning condition is chosen at the player’s whim: say, four ON cells in a row. Once the opening position is established, two rules are applied throughout the game. The first is: a cell is turned ON when exactly three of its neighbors are found to be ON. The second is: a cell remains ON if two or three of its neighbors are ON. The corollary is that a cell with four or more ON neighbors is turned OFF (thereby killing it through overcrowding), as is a cell with fewer than two, which dies of isolation. A single generation of the game involves making all the necessary changes to the board and its cells based on applying these rules to the opening position.
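The two rules are simple enough to sketch in a few lines of Python (a minimal illustration of Conway’s rules; the set-of-coordinates representation is our own choice, not anything from the original game):

```python
# ON cells are stored as a set of (row, col) coordinates; everything
# else on the (unbounded) board is OFF.

def neighbors(cell):
    r, c = cell
    return {(r + dr, c + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)}

def step(alive):
    """One generation: an OFF cell with exactly three ON neighbors is
    turned ON; an ON cell stays ON with two or three ON neighbors and
    is turned OFF otherwise (overcrowding or isolation)."""
    candidates = alive | {n for cell in alive for n in neighbors(cell)}
    new_alive = set()
    for cell in candidates:
        count = len(neighbors(cell) & alive)
        if count == 3 or (cell in alive and count == 2):
            new_alive.add(cell)
    return new_alive

# the 'blinker': three ON cells in a row
row = {(1, 0), (1, 1), (1, 2)}
print(step(row))        # becomes a column: {(0, 1), (1, 1), (2, 1)}
```

Run on that three-in-a-row opening, the pattern oscillates forever, flipping between a row and a column each generation; other openings die out, explode, or settle into stable shapes, which is exactly the unpredictability that drew programmers to the game.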



Depending on the opening position(s), the board changes dramatically over time. It can fill up quickly, die quickly, or last for hundreds of generations without much turmoil. Conway found that the checkerboard soon became too physically limiting for Life, and he ultimately brought it to the computer world. There it found thousands of enthusiasts enamored with its chaotic and unpredictable machinations. Despite a growing legion of interested participants in the programming community, this kind of computer-generated evolution failed to find a name or even a real visionary until 1978, when Chris Langton came up with the term ‘artificial life’ to define a possible integration of computer science, biology, and anthropology. Since then, Langton and his associates from The Santa Fe Institute, Los Alamos Laboratory, and several US universities have been setting the pace for the lab development and small business commercialization of a-life. But even if the theory is sound, it is in practice where a-life must prove itself, and this is only now beginning to happen. While there has been a rash of a-life start-ups over the past decade – notably spin-offs from MIT and Los Alamos – there are still fewer than 20 companies actively marketing any facet of a-life, from genetic algorithms to self-generating worlds. Many have good pedigrees: Prediction Company was founded by Doyne Farmer and Norman Packard of Los Alamos; Thinking Tools is a spin-off of Maxis Corp, the company that developed the popular quasi-business game SimCity; and Redfire Capital Management came out of MIT.


There is a reason for this minuscule number of a-life companies: the transition from lab to viable business is not easy. “The a-life algorithms and GAs that have been written about in books aren’t necessarily the ones that work in the real world,” says David Davis, president of Tica Technologies. “We’ve discovered techniques that work better in production environments, and they require a great deal of tailoring. Quite honestly, there aren’t a lot of people that know how to do that tailoring yet.” Not coincidentally, the companies that are involved in a-life tend to focus on job-specific projects and not packaged applications that can be purchased at the local computer store over the counter. With the exception of Axcelis’ Evolver, a $350 package that works with Microsoft’s Excel spreadsheet, BioComp’s $195 Neuro-Genetic Optimizer for Windows and a few pieces of games software, there are not really any consumer, or even small business, packages available. “We aren’t interested in productizing what we’ve developed,” admits Prediction’s Farmer. “We’re very secretive about our technology. If it was packaged, other companies would have access to the techniques we use. We’ve done a lot of work in this area, and we’re not necessarily interested in giving it away. Plus, there are the problems of packaging and maintaining software, and that involves marketing, and we really don’t want to be involved with that.” Tica’s Davis agrees, adding: “Trying to get an a-life program to work off-the-shelf right now doesn’t necessarily make a lot of sense for the majority of potential users. Regardless of the algorithm, they don’t automatically work with the system you’re trying to optimize, because you don’t know the specifics of the problem that they will be applied to.”
Instead, Tica is looking into the possibility of integrating its technology with existing scheduling and manufacturing packages, which will allow its algorithms to take advantage of established environments with minimal modification. Whether they are project or product-based, this small number of a-life purveyors adds up to a relatively small marketplace: less than $10 million a year in 1996, according to analysts The Relayer Group – and that includes consulting fees. Most of this $10 million is derived from the large-scale development projects mentioned above.


If a-life were to catch on with the same sort of vigor that propelled advanced technologies such as expert systems and neural nets, the market for a-life technology could quickly quadruple on an annual basis. In the technology’s favor, there is a growing perception that traditional computer methodologies such as database sorting and mathematical modeling are outdated and are no longer applicable in a world where technology changes every six months and data overload is a fact of life, not an impending nightmare. Neal Goldsmith, president of Tribeca Research, a New York management consulting and policy research firm, sees a-life developing beyond simulation and modeling. “A-life could be an actual growth mechanism at some point in the future,” he says. “You could develop software with certain constraints and characteristics, and let it evolve and generate until it reached its most robust iteration. At some point, the Web may have to be self-sustaining, since it will be too big for any one system or set of systems to control or maintain. Managing it or controlling security may be up to an a-life program that grows along with the Web and is part of its natural evolution.” Other factions within the computer industry also hope that a-life could provide improved security for an increasingly interconnected data world. Of particular interest are the computer viruses that plague computers today, which are really an elemental form of a-life. These viruses, like their organic counterparts, multiply and feed off of their host environment, spreading quickly and existing only to exist. Computer professionals are looking at how a-life can be the basis for more intelligent programs which could help to kill off dangerous computer viruses, and at how these applications might grow according to the needs of the system or user.


A-life, say academics, could also be applied to the concept of self-replicating robots, each with the power to build another like itself from available parts. The new generation would have the benefit of its forebears’ experience in performing certain operations, creating a higher level of efficiency. The driving ‘evolutionary’ force within the robots would allow them to pass on information gleaned from ‘experience’ by transferring code to the new model. Lessons from failed operations (such as ‘do not attempt to roll down stairs’) could be transferred and become an inherent part of the new system – a built-in instinct for survival that would allow the new machine to avoid such danger. Real life examples of a-life may be few and far between but, if the jump from organic to computer-based life is made, the effects will be dramatic.


Tica Technologies of Cambridge, Massachusetts, has been using a-life technology for eight years. Working with US West, one of the most technologically aggressive of the Regional Bell telecoms companies, Tica has been applying genetic algorithms to the task of network routing design and optimization. “The constraints on this type of problem aren’t linear, and therefore you can’t use traditional software tools to adequately represent the problem,” says David Davis, president of Tica. “In situations like US West’s, it’s a case of a great deal of a corporation’s money riding on even the slightest amounts of improvement. We get those improvements by using genetic algorithms (GAs) to optimize data that already exists, and to enhance the performance of systems that are already in place.” This is not a simple process. “The use of a-life requires a tremendous amount of tailoring,” says Davis. “You can’t just take GAs off the shelf, for instance. Most of our work involves interfacing with databases, so you have to figure out the constraints of the problem and get the data lined up in such a way that it can be optimized, since nobody wants to rewrite the contents of an entire database just to optimize it. Applying the proper algorithms to the appropriate circumstances is more involved than, say, using some neural nets, because you have to deal with more than just specific data points. You have to optimize the relationship between data points.” In addition to its telecom work, Tica is also heavily involved with scheduling, especially job-shop scheduling for manufacturers. According to Davis: “In manufacturing, if you have to turn down business because you can’t handle it since you haven’t scheduled properly, then that’s just throwing money away. Any extra work that you get from optimizing your schedule, therefore, goes straight to the bottom line.” The financial industry also has an interest in a-life. “The guys on Wall Street love this stuff,” he says.
“We frequently have people going to New York to speak about the benefits of the technology to various groups of traders or portfolio managers.” Founded by a-life pioneers Doyne Farmer and Norman Packard, Prediction Company of Santa Fe, New Mexico, has an exclusive contract with Swiss Bank to manage a substantial amount of its investment dollars using a-life. Prediction’s business, like that of all a-lifers, involves the optimization of data. Prediction, however, is looking for those barely perceptible variables, maybe five or six out of hundreds, that will affect the movement of the stock market. “We do proprietary trading for Swiss Bank: they give us capital to trade with and we get a percentage of the trading profit. That’s our business. As part of that, for example, we’ll try to determine the movement of the Standard & Poor’s Index. The question we’re faced with is how can you forecast movement with this huge amount of data? We look at which data points might be relevant on a given day; whether they’ll be relevant today but maybe not tomorrow. We look at what moves the stock market, using a list of variables including everything from interest rates to oil prices. In general, I think there’s a lot of interest in a-life and related technologies right now,” continues Farmer. “People in large companies are willing to see if this works. But a-life is by no means a mainstream solution at this point.”


A-life is just one of the advanced computing technologies designed to bring added intelligence to computing – two others are neural networks and intelligent agents. Neural networks are made up of a mesh of interconnected nodes (‘neurons’) and are designed to mimic the function of the human brain. Unlike a-life, neural networks do not ‘evolve’ over time; their essential structure is set, but they do allow companies to develop systems which can ‘learn by experience’. When an input is sent to the neural system, the computer produces an output which a human expert judges right, wrong or partly correct. If an output is deemed correct, the network lowers the resistance on the correct neural pathway so it will produce the same result from the same input next time. Commercial use of neural networks has been limited. Although the technology was first developed more than 30 years ago, and 70% of the largest companies are believed to be piloting it, few proven applications exist. Intelligent agents, however, are being widely deployed, particularly in conjunction with the World Wide Web. What constitutes an agent is a point of considerable debate but, primarily, it is a piece of software that acts semi-autonomously on behalf of a user and does so with reference to a knowledge or rule base. For example, users can harness agents to track news events across the Internet – the user sets some initial parameters and the agent, through repetitive experience, refines those rules to provide the most relevant information possible. Most a-life programs have intelligent agents at their core, performing tasks and learning from responses. The aim is, through a continual process of evolution, to produce a race of ‘super agents’.
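That ‘learn by experience’ loop can be sketched with a single artificial neuron. This is a minimal perceptron, offered as an illustration rather than a description of any particular 1990s product, and note one inversion: the classical perceptron rule adjusts the weights on the input pathways when the output is judged wrong, the mirror image of the reinforcement described above.

```python
def predict(weights, bias, inputs):
    # fire (output 1) if the weighted sum of the inputs clears the threshold
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# teach the neuron the logical AND of its two inputs; the human 'expert'
# is played by the target column of the truth table
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = [0, 0], 0

for _ in range(20):                      # repeated exposure to the examples
    for inputs, target in examples:
        error = target - predict(weights, bias, inputs)   # -1, 0 or +1
        # nudge each pathway so the same input fares better next time
        weights = [w + error * x for w, x in zip(weights, inputs)]
        bias += error

print([predict(weights, bias, x) for x, _ in examples])   # [0, 0, 0, 1]
```

After a handful of passes the weights settle and the neuron reproduces the AND table exactly; the same feedback loop, scaled up to many interconnected neurons, is the ‘learning by experience’ the article describes.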

By HP Newquist
