Saturday, January 01, 2005

Imitation of Life: How Biology Is Inspiring Computing

By: Gualtiero Piccinini, Ph.D.

One way to boost our military capabilities is to create faster, more efficient computing technology, and one way to do that is to model computing on biological systems. Or so goes the thinking of scientists at the Defense Advanced Research Projects Agency (DARPA), which describes itself as “the principal Agency within the Department of Defense for research, development, and demonstration of concepts, devices, and systems that provide highly advanced military capabilities.” In Imitation of Life: How Biology Is Inspiring Computing, Nancy Forbes, who has worked on computing projects for DARPA, gives us the cook’s tour (or, in her words, “a nonexhaustive survey...written at the level of a technical generalist”) of the ways that biology has shaped computer science. If you wonder why 176 pages is not enough for an exhaustive survey of “bio-inspired computing,” Imitation of Life will open your eyes to a virtually limitless domain, stretching from artificial neural nets to DNA computation to artificial life to computer immune systems. Only a single chapter touches on the flip side of bio-inspired computing: computation-inspired thinking about problems in biology. 

Compared with biology, computer science is a neonate. As a systematic discipline, biology goes back to the ancient Greeks (particularly Aristotle), whereas computer science can be traced back to the 1930s. Since its birth, computer science has learned from biology like a younger sibling. One of the earliest and most fundamental of biology’s influences on computer science can be found at the heart of modern computing: the design of computer circuits. Computer circuits are made of micro-devices called logic gates, which perform simple logical operations. For instance, a NOT gate reverses the values of its inputs, turning a 0 into a 1 and a 1 into a 0. Another type of gate, called AND, yields a 1 if and only if it receives two 1’s, otherwise it yields a 0. Various combinations of these gates can turn any finite string of 1’s and 0’s into any other string. This is binary logic. In theory, you can create a circuit of logic gates to solve any problem defined over finitely many inputs for which you can specify every single step in the computation process. This principle, which underlies the design of today’s computers, was discovered by two scientists of the brain. 
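To see how simple these building blocks are, here is a minimal sketch in Python (my illustration, not from the book) of NOT and AND gates, with OR derived from them to show how combinations of gates yield further operations:

```python
# A minimal sketch of binary logic gates as described above
# (illustrative Python, not from Forbes's book).

def NOT(a: int) -> int:
    """Reverse a bit: 0 becomes 1, and 1 becomes 0."""
    return 1 - a

def AND(a: int, b: int) -> int:
    """Yield 1 if and only if both inputs are 1."""
    return a & b

# Combinations of gates realize new operations; De Morgan's law
# builds OR out of NOT and AND alone.
def OR(a: int, b: int) -> int:
    return NOT(AND(NOT(a), NOT(b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b))
```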

Before meeting Walter Pitts in 1941, Warren McCulloch, M.D., had spent 20 years trying to explain the mind. McCulloch thought of the brain as a logic machine that manipulated mental symbols according to rules of deduction. By applying appropriate rules to the symbols, the brain could extract perceptual representations from sensory information and generate the chains of reasoning that make human thought possible. As a logic machine, the brain used the all-or-nothing action potentials of neurons as its symbols: in response to a given degree of stimulus, a neuron either fires or does not. But McCulloch did not know how to analyze his neural machines mathematically to discover what they were capable of computing, so he recruited Pitts, whose math skills were soon to be legendary. 

In 1943, the two published an epoch-making paper. By thinking about the roles neurons play within a logic machine, they came up with the notion of a logic gate. They also showed how their logical neurons —their biological logic gates—could be wired together to build computer circuits. Their method was soon adopted by Hungarian-born prodigy John von Neumann to describe the design of the new digital computers. 
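In modern terms, a McCulloch-Pitts logical neuron is a threshold unit. The sketch below is my reconstruction, not the 1943 paper’s formalism: a unit fires when the weighted sum of its inputs reaches its threshold, and with suitable weights and thresholds one unit behaves as an AND gate, another as a NOT gate.

```python
# A McCulloch-Pitts "logical neuron" as a threshold unit: it fires
# (outputs 1) exactly when the weighted sum of its inputs reaches its
# threshold. (My reconstruction; the 1943 paper's notation differs.)

def mp_neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def AND(a, b):
    # Two excitatory inputs; both must fire to reach the threshold.
    return mp_neuron([a, b], weights=[1, 1], threshold=2)

def NOT(a):
    # One inhibitory input; any activity keeps the unit below threshold.
    return mp_neuron([a], weights=[-1], threshold=0)

assert [AND(a, b) for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))] == [0, 0, 0, 1]
assert [NOT(a) for a in (0, 1)] == [1, 0]
```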

BUILDING ON THE ANALOGY

Forbes discusses von Neumann’s work only in general terms but covers its influence in depth; he appears to be the major inspiration for the book. She notes that von Neumann realized early “that nature had created the most powerful information processing system conceivable, the human brain, and that to emulate it would be the key to creating equally powerful man-made computers.” 

In 1945, von Neumann and other scientists were designing the EDVAC, which became the prototype of modern, stored-program digital computers, and von Neumann got the job of describing the new machine to the world. The EDVAC used many complicated circuits, which had been designed by electrical engineers, but von Neumann realized that when describing the EDVAC design, what mattered was not the details of the electrical circuits but the logic underlying them. How did the circuits compute? Von Neumann turned to the McCulloch-Pitts logic, describing EDVAC’s circuits in terms of logical neurons. Since then, logic gates have been the fundamental building blocks of all computers. 

The story of how von Neumann shaped computer design along the lines of McCulloch-Pitts logical neurons would have been the perfect entree to Forbes’s book, but she misses the opportunity. She does mention McCulloch and Pitts’s theory of the brain, although she misspells “McCulloch” as “McCullough.” She also states that McCulloch and Pitts originated the field of neural network computing, which is true, and that theirs was the first attempt at a mathematical theory of neural mechanisms, which is false. Mathematical theories of neural networks go back to Nicolas Rashevsky and his group of mathematical biophysicists in the 1930s. A glaring omission is the lack of discussion of the way McCulloch and Pitts’s theory inspired computer design. 

The analogy between brains and computers has become increasingly elaborate. Computers are described as electronic brains, machines that can think faster than humans. Conversely, brains are described as neural computers, and mental processes are explained in terms of neural computations. McCulloch, Pitts, and von Neumann were among the first to draw the analogy, of course, but they also noted disanalogies, and one of them piqued von Neumann’s special interest. 

“AMORPHOUS COMPUTERS”

In computers, every wire and logic gate has a job, without which the computer does not work properly. Any malfunction of any component leads to a breakdown of the entire process. We are all painfully familiar with computer breakdowns, but in the 1940s and 1950s they were almost continual. Early computers were prototypes designed by engineers pushing the new technology to its limits. Parts had to be fixed or replaced constantly. Every computer required full-time technicians to keep it working. Von Neumann asked how computers could be made more reliable, and his thoughts turned once again to the brain.

Unlike computers, brains seemed to work properly most of the time, and without the need of constant repairs. Yet, von Neumann thought there was no reason to assume that neurons were perfectly reliable. Therefore, the brain must have a way to generate reliable computations from unreliable components. From conversations with McCulloch, von Neumann knew that brains had much redundancy built into them; multiple channels and neurons were devoted to the same activity. Von Neumann wrote an original paper on how to do reliable computation from unreliable components by exploiting redundancy. But, Forbes points out, von Neumann was overly pessimistic; the computer industry has managed to build more reliable components. Computers today are just as brittle as the early ones—break a part and you will break the whole. But because they have more reliable parts, they break down less often. Not every researcher has given up the goal of intrinsically greater reliability, however, and the inspiration remains biology. 

Forbes describes the work of Gerry Sussman, Ph.D., of the Massachusetts Institute of Technology, who shares von Neumann’s goal of building computers that achieve correct results even in the presence of hardware failures. Sussman looks to biology for tips, but he looks beyond the brain. He designs communities of processors that compute together by communicating by radio like “a colony of cells cooperate to form a multicellular organism under the direction of a shared genetic program, or a swarm of bees work together to build a hive, or humans group together to build cities and towns without any one element being indispensable.” 

Sussman’s machines are somewhat like parallel supercomputers, which compute faster than ordinary computers by dividing the work among thousands of processors. An essential difference is that, whereas ordinary parallel computers have electrical wires connecting their processors, and work in a fixed pattern, the processors in Sussman’s machines are not physically linked. They talk to each other by radio and can reconfigure their pattern of interaction as the need arises. If one of the processors stops working, the others start ignoring it and keep going on their own. Sussman’s colleague, Hal Abelson, Ph.D., explains the advantages of computing in a team of unconnected units: “First, errors and faulty components must be allowed and won’t disrupt the information processing or affect the end result; second, there will be no need for fixed, geometrical arrangements of the units or precise interconnections among them.” 

The hardware principle of these “amorphous computers” is clear enough. According to Forbes, the challenge is software: How do you get thousands of unreliable processors to self-organize in a constantly changing way and still perform useful computations? Forbes tries to highlight the difficulties as well as the promise of biologically inspired computing. This is admirable, but she tends to neglect crucial pieces of information. In discussing amorphous computing, it would help to remind readers that programming ordinary parallel computers, which are connected in a fixed pattern, is already hard, and sometimes impossible, because computational problems must be broken into subproblems that distinct processors can solve independently. Programming amorphous computers adds the further difficulty of coping with changing connections between the processors, so programming remains the greatest obstacle to progress in amorphous computing. 
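A toy simulation can make this concrete. The sketch below is my own construction, not Sussman’s system: unwired processors gossip with randomly chosen peers to agree on the maximum sensor reading among them, and the computation degrades gracefully as units fail.

```python
# A toy sketch of amorphous computing (my construction, not Sussman's
# actual system): unwired processors gossip with random peers to agree
# on the maximum sensor reading, ignoring units that fail along the way.

import random

class Unit:
    def __init__(self, reading):
        self.value = reading   # the unit's local sensor reading
        self.alive = True

def gossip(units, rounds=40, failure_rate=0.01):
    for _ in range(rounds):
        for u in units:
            if not u.alive:
                continue
            if random.random() < failure_rate:
                u.alive = False        # hardware failure; no repair
                continue
            peer = random.choice(units)
            if peer.alive:             # dead peers are simply ignored
                u.value = peer.value = max(u.value, peer.value)
    return [u.value for u in units if u.alive]

random.seed(0)
swarm = [Unit(random.randint(0, 10_000)) for _ in range(200)]
print(set(gossip(swarm)))  # survivors typically agree on the maximum
```

Even in this toy, all the subtlety lives in the gossip protocol rather than the hardware, which is exactly Forbes’s point about software being the bottleneck.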

If these and other difficulties could be overcome, amorphous computing could yield great benefits. Besides giving us more robust, fault-tolerant computers, amorphous computing might be used to create “intelligent” materials such as paints, gels, or concrete that incorporate tiny computing elements within them. Objects made of intelligent materials could detect events, respond appropriately, and report to the outside world. For instance, a smart wall could notice intruders or cancel out surrounding noise. 

COMPUTERS THAT MUST BUILD THEMSELVES

Intelligent materials are only one example of the vast technological vistas opened by imitating biology. One way to become more effective is to get smaller—a truth in computing, medicine, and building materials, as well as, alas, warfare. Hence the tremendous interest in nanotechnology—technology at the scale of atoms and molecules. The difficulty of nanotechnology is that below a certain size you can no longer use traditional technologies and tools. You cannot nail two atoms together, because hammers and nails are themselves collections of atoms. Atoms establish chemical bonds with one another, and the only way to build on the nano scale is to trick atoms and molecules into binding in the way you desire. This is called molecular self-assembly. 

One ambitious subfield of molecular self-assembly is DNA self-assembly. Forbes explains how DNA self-assembly can be adapted to the needs of computation. DNA—our genetic code—can be seen as a string of symbols, and a problem in computation can be understood as finding the correct output string of symbols (in this case, of DNA) when given a certain input string (or strand). Because each strand of DNA combines only with certain other strands, you can encode a computational problem in terms of DNA strands, and the DNA is already engineered by nature to find the solution for you. 

When is this useful? DNA computation helps a lot in what are called “combinatorial problems,” which require exploring myriad possible combinations of symbols. For example, if you want to know whether there is a way to travel through a large number of cities without visiting any city more than once, you may have to try many different routes. In an ordinary, serial computer, you have to try one route after another. But if you encode each route in strands of DNA, all the strands react at once, which is like trying all routes at once. The great speed gained by this combination of nano scale and parallelism is the strength of DNA computing. 
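Leonard Adleman’s celebrated 1994 experiment attacked a route problem of just this kind with DNA in a test tube. The sketch below only simulates the idea in software, with a hypothetical four-city map of my own: candidate routes are filtered the way annealing chemistry filters candidate molecules, except that the test tube forms and tests all of them in parallel while this code enumerates them serially.

```python
# An in-silico sketch of DNA-style route search (hypothetical four-city
# map; real DNA computing encodes cities and roads as short strands
# whose sticky ends anneal only in permitted combinations).

import itertools

cities = ["A", "B", "C", "D"]
roads = {("A", "B"), ("B", "C"), ("C", "D"), ("B", "D"), ("A", "C")}

def anneals(route):
    """A candidate 'molecule' survives only if every adjacent pair of
    city strands is joined by a road strand."""
    return all((a, b) in roads or (b, a) in roads
               for a, b in zip(route, route[1:]))

# In the test tube, every candidate molecule assembles at once; here we
# enumerate the permutations one after another instead.
solutions = [r for r in itertools.permutations(cities) if anneals(r)]
print(solutions)  # every route that visits each city exactly once
```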

There are limits to DNA computation, Forbes reminds us. It has solved small-scale problems but has not been scaled up to larger ones. There will always be a limit to how large a problem it can handle, because the amount of DNA needed grows exponentially with the size of the problem. Even medium-scale problems require the painstaking preparation of DNA strands by expert technicians, and the results of DNA computation often involve a significant amount of error. More generally, biomaterials such as DNA are still too unstable, too fragile, and too hard to interface with ordinary materials (such as metal, glass, or ceramic) to be used effectively in computing. 

CELLS, SELF-REPLICATION, AND “ALIFE”

Even before DNA’s structure was deciphered, von Neumann’s biological interests led him from the brain to the cell. Cells have all the information they need to carry out their tasks, including reproducing themselves. Von Neumann developed a mathematical theory of self-reproduction by creating a cellular automaton. Consider an infinite checkerboard of elements called “cells.” Each cell can take a finite number of internal states. At any given time, a cell’s behavior, or state, depends on the state of its neighboring cells. Now, consider the patterns of cell states on the grid as the inputs and outputs of a computation. With the right rules and internal states, these cells can create copies of themselves on the grid: They can self-replicate. A similar principle is behind computer viruses—programs that install themselves in a computer and send copies of themselves to other machines.
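Von Neumann’s self-reproducing automaton required 29 cell states; as a simpler stand-in, the sketch below uses Conway’s later Game of Life (my choice for illustration, not von Neumann’s construction) to show the core mechanism: purely local update rules from which a pattern, the glider, re-creates its own shape elsewhere on the grid.

```python
# A minimal cellular-automaton sketch. Von Neumann's self-reproducing
# automaton used 29 cell states; for illustration this uses Conway's
# later, much simpler Game of Life (two states). Each cell's next state
# depends only on its neighbors, yet the "glider" below re-creates its
# own shape one cell over every four steps.

from collections import Counter

def step(live):
    """One synchronous update of the set of live (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # the same glider, shifted diagonally by one cell
```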

Another application of mathematical self-replication is artificial life, or ALife. Forbes quotes from the first ALife workshop, in 1987, a definition of artificial life as “the study of artificial systems that exhibit behavior characteristic of natural living systems.” Researchers have created artificial worlds populated with artificial creatures—creatures that are computer programs. Forbes credits biology, and specifically evolution, sexual reproduction, and natural selection, with inspiring the computer programming that makes evolution of ALife possible. Evolutionary programming is getting computer programs to solve problems by letting the programs evolve like organisms. You start with a first generation of ALife creatures (programs) that are more or less adept at solving a given problem—say, playing checkers. Then, you make the ALife creatures reproduce by intermixing their codes and making copies of themselves. Sometimes mutations occur. You discover by testing which offspring programs are better at solving the problem, and you let those programs reproduce. After many generations, you may get programs that are much better checkers players than their ancestors were. 

But would it not be simpler just to program the computer from the beginning to solve the problem? Sometimes yes, sometimes no. For computational problems that involve many variables and many constraints on a solution, such as playing checkers or designing industrial plants, it may be tough or impossible to figure out the best solution from the start. By letting programs reproduce and adapt over many generations, sometimes it is possible to obtain better programs than by just sitting in a chair and writing code. 
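A bare-bones genetic algorithm shows the reproduce-mutate-test loop just described. Checkers is far too big for a sketch, so this toy (a stand-in problem of my own) breeds bit strings scored by how many 1s they carry:

```python
# A bare-bones genetic algorithm (toy stand-in: each "creature" is a
# bit string, and fitness counts its 1s instead of scoring checkers play).

import random

GENES, POP, GENERATIONS = 20, 30, 40

def fitness(creature):
    return sum(creature)                  # stand-in for playing skill

def crossover(mom, dad):
    cut = random.randrange(GENES)         # intermix the parents' code
    return mom[:cut] + dad[cut:]

def mutate(creature, rate=0.02):
    return [g ^ 1 if random.random() < rate else g for g in creature]

random.seed(1)
population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]       # testing decides who reproduces
    population = [mutate(crossover(random.choice(parents),
                                   random.choice(parents)))
                  for _ in range(POP)]
print(max(fitness(c) for c in population))  # later generations score higher
```

Swapping in a meaningful fitness function, such as win rate against a fixed checkers opponent, turns the same loop into the procedure Forbes describes.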

In the past few decades, evolutionary programming has emerged as a powerful method for tackling large-scale optimization problems. An example is Tierra, the virtual world created by Tom Ray, Ph.D. Tierra started with only one creature, called Ancestor. Ancestor soon multiplied into different organisms, each with its own behavior and computer code. After many generations, Tierra contained more than 350 types of organisms, 93 of which had populations of five or more individuals. Many aspects of natural evolution emerged on Tierra, from communities of genetically uniform organisms to parasites exploiting their hosts. But the evolutionary process in Tierra takes only a few days. This is an experiment that biologists cannot yet run, but ALife researchers can. Forbes stresses the power of ALife, quoting John Casti, Ph.D., of the Santa Fe Institute: 

I think ALife will ultimately enable us to properly understand evolution and the workings of cellular machinery mostly because it will offer us the chance to do the kinds of experiments that the scientific method says we must do—but cannot with the time and/or spatial scales of material structures like cells themselves. 

SO, IS EVERYTHING COMPUTATION?

Some people think that ALife is really alive, at least in part because they believe life—including bio-life—is just a form of computation. Some even think that everything is computation. Notice that this theory exacts a price: it deprives the idea that thinking (or life) is computation of any meaning, any bite. If everything is computation, then certainly life is computation, too, and so is thinking. But this no longer tells us anything specific about how thinking or life are possible; it just asserts that everything is computation, and thinking and life are part of everything, which we already knew. 

For that matter, is the idea that everything is computation even intelligible? 

Computer programs, including cellular automata, are not the sorts of thing that can run “by themselves”; they need hardware to run on. Computer programs run on computer processors, the mind may run on the brain, and life may run on cellular hardware. No hardware, no software. Except now we are being told that everything is software. Exactly what is this software running on? After all, on this notion, even your computer motherboard and keyboard are supposed to be computer programs. So are their components all the way down to subatomic particles? It becomes difficult to fathom what the idea even means. 

I see no reason to abandon our ordinary way of thinking. Physical processes are what they are, with all their physical properties. Life is a physical process, characterized by the complex organization of large molecules. And thinking is a kind of biological process, which occurs when certain kinds of cells—neurons—interact appropriately. Perhaps all there is to thinking is the manipulation of internal symbols according to rules. If so, then thoughts can be reproduced on a computer, for that is what computation is: the manipulation of symbols according to appropriate rules. 

Perhaps, though, there is more to thinking than that. Certainly, there is more to biological life than the manipulation of symbols; organisms are cohesive entities that grow, persist, and repair themselves, then die and decay. ALife creatures have a lot in common with biological organisms, including the ability to reproduce, but they are not the same. They are not cohesive entities that move around in space and time; they do not eat, smell, or hear any more than you can catch real fish in a computer model of a lake. Taking the computer-biology analogy too far begins to yield rapidly diminishing returns. 

As Forbes makes explicit in her preface, many topics she covers, from neural networks to cellular automata, are dealt with in other books. Some are not, however. In Imitation of Life Forbes provides an accessible survey of all these fields. If you want a relatively brisk tour of the entire landscape that lies today at the intersection of biology and computing, this book is it. But the price of breadth is that explanations are often superficial and sometimes unclear, and there are some mistakes.  

EXCERPT

From Imitation of Life: How Biology Is Inspiring Computing by Nancy Forbes. © Nancy Forbes. Reprinted with permission of MIT Press.

HARNESSING THE SUN’S RAYS IN ARIZONA

Researchers at Arizona State University are not only developing their own version of biomolecular hardware, but have also established a multidisciplinary program of research and graduate study in this field, with the help of the National Science Foundation. The program integrates biology, biophysics, chemistry, and engineering, with a focus on biomolecular devices, both natural and manmade, and offers training in biohybrid circuits, light-powered molecular engines, DNA synthesis and repair, and protein engineering. Arizona State University (ASU) has been a well-known center of research activity in photosynthesis for over fifteen years, with a large group of faculty from the life and physical sciences engaged in this work. “We came to the point,” says ASU scientist Neil Woodbury, “where it was clear to us that the paradigms we were learning in the study of photosynthesis were suited to a large number of electronic device applications. We wanted to work on these problems, so starting a program with students, classes, and seminars seemed like a good vehicle. Furthermore, the idea captured our imagination.” 

Photosynthesis—the process that occurs in a green plant when it synthesizes organic compounds from carbon dioxide and water in its environment in the presence of light—can be viewed as a light-activated power source in nature. It involves several processes that can be co-opted for electronics. For example, it happens very fast (roughly a millionth of a millionth of a second, compared with computer clock times that are in the order of a billionth of a second), and is switched on by light, suggesting the possibility of photosynthesis-powered optical logic gates—a true organic computer! 

Arizona State University researchers Devens Gust, Tom and Ana Moore, and Gali Steinberg-Yfrach have studied the activities of the photosynthetic “reaction center,” a center where photosynthetic light produces energy. They realized that the process could be a microscopic solar power pack, and so set about trying to create a synthetic version. Their work employs water suspensions of liposomes, which are tiny hollow spheres with double-layered lipid walls, resembling cell membranes. The group has learned how to add synthetic compounds to these walls, causing them to act just like the photosynthetic membranes inside plant cells. The liposomes are then able to capture sunlight electrochemically, using it to generate energy. “These artificial systems,” says Gust, “may be useful for powering small man-made machines that couple light energy to biosynthesis, transport, signaling, and mechanical processes.” 

However, once they’ve made these photovoltaic devices, researchers must still find ways of interfacing them with electronic circuits. There are several processes that could serve as candidates for this task: absorbing or emitting photons, heat intake or loss, or electrical signals from the movement of electrons. So far, however, they have found no reliable form for the interface, and, says Neil Woodbury, “No one says it’s going to be easy.” 

Another effort at ASU’s biomolecular center is more directly related to transistor applications. Working with Michael Kozicki, a professor in the Electrical Engineering Department, Gust has developed a hybrid device that uses surface monolayers of light-activated molecules to control the current flow in a transistor. Kozicki strongly believes that the first practical applications of these molecular electronic systems will require a novel architecture different from that used in current silicon transistors. Closer scrutiny of biological systems will help guide these efforts, says Kozicki. “I’m the first to admit that biology is the best teacher here.”



