Unification of Science and Spirit -- Copyright Ben Goertzel © 1996


Chapter 5

THE COMPLEX, CHAOTIC WORLD

    Western philosophy and science have tended to focus on annamaya, the world out there -- the objective world, solid, definite, and inanimate. Non-Western cultures, on the other hand, have generally had more of an inclination to consider the world as alive. That is, they have focused more on the pranamaya level of being.

    But today, things are changing. Western thought is finally catching up with its predecessors, and coming to appreciate the subtle, complex, living nature of the outside world. Environmental sensitivity is in, along with sensitivity to the needs of one's body. It is increasingly being rediscovered that, rather than being minds navigating bodies through an inert world, we are actually embedded in a complex ecology of intelligences. The body itself is a subtle, self-organizing intelligent system, as is the ecosystem in which the mind/body lives. Human beings, and human minds, are just part of a larger matrix of intelligence.

    Computer technology might, at first glance, seem to be opposed to this increasing acceptance of the aliveness and intelligence of the world. After all, computers take us away from our bodies, and away from the world. Monitors irritate the eyes, keyboards give us repetitive motion injuries, and computer technology as a whole relies on a vast and environmentally destructive manufacturing industry. Computers are "nerdy," appealing to precisely those individuals who are most uncomfortable with their bodies and with social interaction.

    In the realm of science, however, the effect of computers has overwhelmingly been to work against the reductionist, mechanistic world-view, against the isolation of mind from body, and the isolation of body from environment. In one area after another, computer technology has served to demonstrate to researchers the subtlety, wisdom, and intelligence of natural systems.

    Before computers, scientists had to rely on logical thought and mathematical analysis to construct their theories and experiments. The restrictions of the human intellect forced them to work with highly simplified models of natural systems. These simplified models, in most cases, ignored precisely those aspects of natural systems that make these systems alive. With computer simulations, however, scientists are free to put the life back into their models. No matter if the complex, subtle interactions are too much for human logical reasoning or mathematical computation to handle -- the computer can handle it!

    This is the story of complexity science. Complexity science is a very exciting, high-tech endeavor, involving the latest developments in mathematics, experimental methodology and computer technology. At bottom, however, its essence is simple: to look at systems as wholes, to focus on interdependence rather than independent behavior of parts. In this sense, complexity science is a return to the intuitions underlying non-Western scientific traditions, such as Chinese science, witchcraft, and shamanism. Computer simulations allow scientists to focus on the big picture, instead of breaking things down into parts. Complexity science enables us to incorporate aspects of the wisdom of the past into the science of the future.

    ***

    One way to understand the meaning of complexity science is to think about the difference between mainstream modern, scientific medicine and the traditional medicine of other cultures. Mainstream modern medical science is based on a reductionist approach: one identifies the functions of particular body systems, and studies these systems in relative isolation. One tries, as much as possible, to work around the interconnectedness and interdependence of body systems. For after all, the more systems one tries to consider at a time, the more difficult it becomes to structure scientific experiments.

    The reductionistic approach to medicine has led to a "pill for every ill" philosophy which ignores the effects of diet, attitude and stress on health, and minimizes the importance of the side-effects of medications. This single-system approach has led to many triumphs. But, even so, the medical establishment is losing more and more customers each year, as people become increasingly interested in various "naturalistic" and "holistic" therapies.

    Some of these holistic therapies are remarkably effective; others, no doubt, are pure chicanery. I, personally, tend to be even more critical of holistic medicine than I am of conventional "pill for an ill" medicine. If I have a medical problem, I will go to a doctor first, and then, if the remedy offered seems dangerous or ill-founded, I will consider other methods. But the point here is that, unlike conventional medical science, holistic therapies respect the interconnectedness of body systems -- the complexity of the body. For instance, a nutrition therapist might cure an ear infection by recommending a change in diet. Or an acupuncturist might cure a headache by inserting needles in areas of the body distant from the head. This represents a fundamentally different way of understanding the body than one finds in conventional medical science.

    Holistic medical therapies tend to be founded on anecdotal evidence, obtained in the course of everyday life, rather than on scientific evidence obtained in controlled studies. To an extent, this is a matter of culture: the people who are interested in alternative medicines are not the same ones who care about scientific evidence. And it is a matter of politics: there is little funding for scientific study of unusual therapies. But there are also serious practical obstacles in the way of studying alternative medical therapies in a rigorous way. For instance, dietary remedies are commonly understood to vary greatly in effectiveness from one person to another. A food which helps one person may actually hurt another. This is perfectly reasonable, as each of us has our own unique body chemistry -- but it makes research even more difficult than it would be otherwise.

    Where does complexity science come in? Complexity science methods allow us to understand exactly what kinds of high-level self-organizing phenomena the successful holistic medical therapies are making use of. As yet, this aspect of complexity science is only at a very primitive stage: we do not yet have a complexity-theoretic understanding of holistic medicine. But we are beginning to put together the elements of such an understanding.

    To take a single example, which will be pursued in detail below, holistic practitioners have long spoken of the "wisdom of the body", the innate intelligence of body systems. And recent computer simulations suggest that the immune system, itself, is highly intelligent. It is a complex pattern-recognition and problem-solving system, similar in many ways to the brain. Furthermore, this immune intelligence is continually in communication with the brain intelligence, leading to the new scientific area of "psychoneuroimmunology," a fancy word for the study of the effects of stress, attitude and so forth on resistance to illness.

    Other examples abound. For instance, the complexity of metabolic processes is only now becoming understood. The Belousov-Zhabotinsky reaction, which is the standard example of spatiotemporal chaos and chemical self-organization, originated as a model of the Krebs cycle, the process by which organisms break down food into energy and carbon dioxide. Metabolic processes, it seems, move through spontaneous nonlinear oscillations, and weave complex spatial patterns such as spirals. The effect of all this complexity on human metabolism is a subject of current investigation, both on the computer and in the test tube. What is the use of this oscillation, these spirals and other forms, inside our bodies as we metabolize food? The point is that we are reaching up toward an understanding of the subtlety of digestion.
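
    To give a concrete feel for what spontaneous nonlinear oscillation means, here is a minimal sketch in Python of the Brusselator, a textbook abstraction of self-organizing chemistry in the Belousov-Zhabotinsky family. The model and parameter values are standard; their use here as an illustration is my own. Fed by two simple rate laws, the concentrations x and y refuse to settle into equilibrium, and instead fall into a sustained rhythm:

# The Brusselator: a minimal model of chemical self-organization. With
# A = 1 and B = 3 the steady state is unstable, so the two concentrations
# oscillate indefinitely instead of settling down.
A, B, dt = 1.0, 3.0, 0.01
x, y = 1.0, 1.0
for step in range(6001):                     # sixty time units, Euler steps
    dx = A + x * x * y - (B + 1.0) * x
    dy = B * x - x * x * y
    x, y = x + dx * dt, y + dy * dt
    if step % 1000 == 0:
        print(f"t = {step * dt:4.0f}: x = {x:.3f}, y = {y:.3f}")

The printed concentrations rise and fall without ever converging -- a crude computational glimpse of the kind of rhythm that metabolic chemistry weaves.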

    Holistic medicine speaks a great deal about metabolism and digestion, from an entirely different angle. The effects of different foods on different parts of the mind and body are described in detail, based on personal observation rather than controlled experiment. Ordinary science gives no way of understanding how these effects might happen, but complexity science is moving toward an understanding of the subtleties of metabolic process.

    Yogic food science advocates the eating of raw, uncooked foods, and as justification speaks of the prana or life-force contained in them. This prana is destroyed by cooking, and by other varieties of food preparation such as freezing and canning. On the face of it, the concept of prana in food has no scientific meaning. But when one thinks in complexity science terms, things are not quite so clear. Prana is not any one component of the food, but perhaps it is some emergent quality that comes about as a combination of all the elements of the food. Complexity-theoretic biology teaches us that living organisms are self-producing, "autopoietic" systems, in which each element produces each other element: the whole system is richly interconnected. What the yogic tradition refers to as food prana could be an emergent property of this autopoietic system. In the words of the Archbishop of York John Habgood, who is also a physiologist,

    Life is not some sort of essence added to a physico-chemical system, but neither can it simply be described in ordinary physico-chemical terms. It is an emergent property which manifests itself when physico-chemical systems are organized and interact in particular ways.

Carrying this train of thought a little further, a scientific way of formulating the yogic notion of prana in food would be as follows: "The same interdependences by which the different parts of a plant or animal produce each other, are useful in the digestive process as well. Eating whole foods allows the different parts of the food to help digest and break down each other." I am not saying that this has been proven -- it is just an hypothesis. But the mere fact of casting these yogic ideas in the form of scientific hypotheses is worth something, and opens the door to experimental investigation of these ancient and intriguing ideas.

    I have been speaking about medicine and the body. But what I have said for the body, goes for other complex biological, psychological and social systems as well. These systems are integrated wholes, the dynamics of which are dominated by emergent properties. Each part exists to serve the other parts, and is in most cases constructed from the other parts. Each such system is a web of pattern -- a web of interdefined, interemergent entities, which all come out of each other and explain each other. Complexity science -- the science of the future -- is a science of webs of pattern. It is a science of pranamaya.

     THE NEW SCIENCE OF COMPLEXITY

    Many of the basic concepts of complexity science date back to the mid-century Cybernetics and General Systems Theory movement. This movement produced some very interesting ideas, in a variety of disciplines, but it never lived up to its own advertisements, and eventually it became a bit of a laughingstock. What today goes by the labels "chaos" and "complexity" can be viewed as a new, improved version of General Systems Theory. Empowered by advanced computing technology, scientists are arriving at the kinds of results that early general systems theorists only dreamed about.

    Complexity science -- like all newly emerging branches of science -- is messy and multifaceted. Its concepts are not simple to understand. But they are well worth the effort required. For sheer intellectual drama, there is little to compare with the spectacle of science finally coming to grips with the aliveness of the world -- with the world as it is subjectively perceived, understood and created, and as non-Western cultures have long understood it.

    ***

    Modern complexity scientists have not abandoned the classical methods of science and mathematics -- they still make use of formulas and theorems, and laboratory experiments in controlled environments. But they supplement the results of these calculations and experiments with qualitative observations of whole systems, and with carefully engineered computer simulations. Computer simulations allow one to create and study systems that are simpler than real-world complex systems, yet more complex than those systems one can manipulate in the laboratory, or study through mathematical analysis. In the happiest cases at least, they form a bridge between the reductionist scientific method and the holistic behavior of complex systems.

    The advent of complexity science is more an evolution than a revolution: it is not a matter of one or two or ten or twenty big discoveries, but more a matter of a thoroughgoing gradual alteration in the way scientists think and work. Many popular commentators have failed to understand this. For instance, a 1995 article in Scientific American, entitled "Is Complexity a Sham?", missed the point entirely. The article points out how a few of the supposed achievements of complexity science have been oversold, and argues that the Santa Fe Institute, a center for complex systems research, does not live up to its reputation. Indeed, many of us who work in the field have had the same reaction to the Santa Fe Institute, an organization which has received a tremendous amount of publicity despite a fairly modest output of scientific results. But this is a result of the way the mass media works and has nothing to do with the nature of complexity science. The fact is that the vast majority of complex systems oriented research is done in ordinary labs and universities, published in ordinary scientific journals, and never mentions the word "complexity." Every time a scientist uses a computer to simulate an holistic, large-scale phenomenon that would be impractical to study otherwise -- this is an example of complexity science. The more strongly the work focuses on emergent, holistic phenomena, the more complexity-oriented the work is. Research done at complexity centers like the Santa Fe Institute is significantly more complexity-oriented than average, but it is just one end of a continuum.

    Some people have talked a lot about the possibility of "unifying laws" of complex systems. In fact, it is far too early to expect general laws of complex system behavior to emerge -- if indeed they ever do emerge at all, and complexity science does not instead bring about a drastic revision of our concept of "scientific law." Complexity science is still in an early stage of development, in the stage of data collection and exploratory theorizing. It may be more appropriate to think of it, not as a new science in itself, but rather as a new phase of science, a new attitude and conceptual toolbox underlying and subtly transforming all branches of human inquiry. What complexity science seems most likely to lead to is, not new laws, but rather two things: new heuristic principles for understanding systems, and new analytical techniques for getting at the special-case "laws" underlying particular types of systems. The concept of a general law may fade, replaced by higher-level regularities, metalaws and metametalaws. As the science of complexity develops, so will the complexity of science.

    ***

    To truly appreciate the revolutionary nature of complexity science, one must understand the type of scientific methodology that preceded it. I remember, in ninth grade science class, getting a lecture on how science works. The story went something like this: "The most important step in the evolution of modern science was the development of the experimental method. To design a good scientific experiment, one sets up a carefully controlled environment, in which everything is constant except the small set of factors that one wants to study. Then one figures out a way to manipulate these factors, so as to determine the laws that govern their behavior. The theorist's job is to determine general laws or formulas which explain the tables of numbers obtained by the experimental scientist."

    And, sometime during my first year of university, I got a similar lecture from my calculus teacher on the workings of mathematics. "Mathematics is the study of necessary truths. It is the deduction of theorems from axioms using precisely specified logical rules. Where the axioms come from doesn't matter, nor does the 'meaning' of the logical rules."

    These are tidy and elegant pictures. But, as complexity science shows, they are not entirely accurate! As we move toward the twenty-first century, the shortcomings of these classical views are becoming more and more apparent.

    Not all real-world phenomena can be studied in carefully controlled environments. In particular, the reductionistic perspective runs into severe difficulties when applied to living systems. No real organism lives in a carefully controlled environment; and no real organism is a static, carefully controlled system. In making the transition from living reality to the laboratory, precision is gained, but something else, something equally important, is lost. What is lost is complexity, or in other words, those properties of whole systems that cannot be easily reduced to properties of the parts. Examples of such holistic properties abound in everyday life -- four which spring to mind are states of mind, social and cultural trends, global weather dynamics, and stability of ecosystems. Medical science, as discussed above, is another powerful example. The body is not carefully controlled; it is subject to all kinds of strange fluctuations and outside influences. But it is precisely the nature of these fluctuations, and the reactions to these influences, that we want to understand!

    Similarly, not all interesting mathematical structures are easily amenable to rigorous deductive analysis. Some mathematical systems, which are very pertinent to the study of real-world phenomena, are much more easily studied by computer simulation. Among the many examples of this are mathematical structures which model complex systems, such as cellular automata, nonlinear dynamical systems, fractals, and genetic algorithms. Computer simulation of mathematical processes does not yield rigorous proofs; it is a kind of "experimental mathematics." Some would say that it is not mathematics at all. But it has, to a close approximation, the same kind of interest and applications that ordinary mathematics does.

    Complexity science lets us get a direct understanding of whole systems -- something that, in the past, was tacitly assumed to be out of reach. It gives us glimpses of the way in which wholes remain wholes, in which they maintain their integrity and coherence in the face of microscopic flux. And this is a very important thing. For, according to the Perennial Philosophy, the universe itself is made of processes, eternally dancing around and through each other in inter-transforming fluctuation. The concrete objects, ideas, feelings that we perceive in the world are all systems of processes, which hold themselves together in much the same manner as the natural systems studied by complexity scientists. In its focus on wholes, complexity science brings science one step closer toward spirituality.

     Complexity and Emergence

    Complexity science is part of the computer revolution -- it depends integrally upon the use of computer simulations. But it may also be understood in a different historical light -- as part of the twentieth-century movement away from the restrictive ideas of nineteenth-century Newtonian physics. Modern physics explicitly contradicts Newtonian physics. It gives the right answers, where classical physics gives the wrong ones. Chaos and complexity are, on the other hand, consistent with the letter of nineteenth-century physics. But they violate the spirit of nineteenth-century science every bit as flamboyantly as quantum physics does.

    Classical physics holds that the universe is a giant machine. In this language, what chaos theory shows is that most machines are insanely difficult to predict, so difficult that their behavior appears almost random. And what complexity science shows is that many machines display their own idiosyncratic, holistic behavior. This holistic behavior is intricately structured, yet nearly impossible to predict from the behavior of the machine's component parts.

    Thus, what chaos and complexity show is that the mechanistic universe, quite apart from not being real, was never all it was cracked up to be. The vision of the "clockwork universe" was misguided even in the context of classical physics.

    According to chaos mathematics, even simple systems like pendulums are not really predictable like clockwork. A pendulum, when driven by a periodic force, can oscillate in a chaotic way, a way that is statistically indistinguishable from random behavior. In a chaotic system, simple rigid rules act over and over again, and after a period of time they lead to behavior that appears totally or partly random. In a complex system, the simple rules lead to behavior that seems random at first, but on careful scrutiny yields subtle underlying patterns.
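
    The point is easy to demonstrate for oneself. Here is a minimal sketch in Python, using not a pendulum but the logistic map, the standard textbook example of chaos: one simple rigid rule, applied over and over to two almost identical starting values, soon produces histories that look entirely unrelated.

# The logistic map x -> r*x*(1-x) in its chaotic regime (r = 4). Two
# trajectories that begin a millionth apart diverge until the gap is as
# large as the values themselves: sensitive dependence on initial conditions.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.400000, 0.400001
for step in range(1, 41):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step}: x = {x:.6f}, y = {y:.6f}, gap = {abs(x - y):.6f}")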

    Many systems in the body seem to be complex or chaotic. For instance, the human heartbeat is chaotic. The beating of the heart is caused by well-known physiological mechanisms. But what these mechanisms do is to cause the heartbeat to fluctuate in a semi-periodic, but partly random way -- because this, in fact, promotes the optimal health of the organism. On the other hand, as will be discussed in detail in later chapters, the human brain is chaotic and complex. Its behavior is so complex as to appear random at first -- but careful inspection reveals subtle underlying patterns, which cannot be derived from the mechanical interactions of brain cells in any practical way.

    Many complex systems seem to operate on the "edge of chaos," in the sense that, if they were perturbed a little bit, they might sink into absolute chaos, or else into overly repetitive, periodic behavior. When they need a little innovation, they perturb themselves in such a way as to dip into chaotic dynamics. When they need stabilization, they perturb themselves in such a way as to move toward periodic behavior. This "edge of chaos" behavior is an example of the kind of high-level emergent pattern observed in complexity science. It is a general characteristic observed in many complex systems, which is somewhat difficult to quantify and to rigorously validate, but which is nevertheless definitely, intuitively there.

    ***

    The "edge of chaos" is by no means the only, or most convincing, example of a high-level emergent pattern spanning different complex systems. A better-established and longer-recognized emergent pattern is hierarchy. Nearly every complex system seems to display some kind of hierarchical structure, whereby higher levels set the boundary conditions for lower levels, and lower levels determine the properties of the individual constituents of higher levels. Hierarchy seems to be an extremely efficient way of organizing complex networks of processes -- a notion that resonates very nicely with the hierarchical structure of the universe, as viewed in wisdom traditions. In Chapter 6 we will see that the hierarchical structure of the brain leads into the hierarchical structure of the mind, which itself leads into the Vedantic hierarchy of the universe as a whole.

    Similarly, nearly every complex system, when it needs to create new structures, seems to rely on some kind of evolutionary dynamic -- some dynamic involving processes of mutation or "sexual" reproduction, followed by processes of loosely "fitness"-based selection. Hierarchical structure and evolutionary dynamics can both be seen in the brain, in the mind, in ecosystems, and in organisms. All of these examples, along with the edge of chaos, will be returned to and discussed in detail in later chapters.

    The general notion of a high-level emergent pattern, spanning different types of systems, is a very important one. It gives us, among other things, the key to interpreting chaos and complexity in terms of the hierarchy of being. What complexity science teaches us is, quite simply, that dealing with living systems on the level of physical reality alone is hopeless. If we want to understand the behavior of complex physical systems, we have to be willing to look for complex, emergent, holistic patterns -- abstract, loosely defined structural and dynamic relationships among various systems. These holistic patterns are not, themselves, physical entities, nor are they easy to define with the crispness expected in physical science -- but they are there, and they bind together areas of science that were previously considered quite distinct.

    The regularities being detected by complexity science researchers are different from the regularities dealt with in normal science. They are not as precise, not as easy to quantify, or to label. But they are no less vivid and real. They can be felt by human observers, and detected in computer simulations. They infect science with a dose of the subtle, emergent connecting patterns typical of anandamaya or vignanamaya.

     Key Concepts of Complexity Science

    Soon I will turn to specifics -- to the ways complexity science has been used to understand specific living systems like the immune system, the brain and the ecosystem. First, however, a brief review of basic concepts will be necessary.

    At the core of complexity science is the notion of a dynamical system. Dynamical systems are simply systems that change over time. In practice, what one studies in dynamics are systems whose change over time is governed by fairly simple equations. In the past, one was limited to equations that could be solved "by hand" -- mathematically. Now, however, we are able to solve all sorts of equations by computer, and so the scope of dynamical systems theory has considerably broadened. I will summarize the lessons of this kind of computer experimentation with three key concepts, the "a-words": attractor, adaptation, and autopoiesis.

    These are technical concepts, but in the end, they reflect deep philosophical ideas. Attractors are a way of reconciling processes and structures: they are structures that rise out of processes. Adaptation and autopoiesis are opposing forces of novelty and conservation. One is responsible for creating new forms, the other is responsible for retaining forms that have been created. Put together, attraction, adaptation and autopoiesis form a picture of a universe that is concerned with the creation and destruction of fluctuating processes that give rise to an emergent level of structures. This is intuitively quite resonant with the Perennial Philosophy -- but here we are talking about computer simulations rather than mystical insights!

    ***

    An attractor, first of all, is a certain behavior of a system which gives the impression of having "magnetic power" over other behaviors of the system. Once the system is following the attracting behavior, it is guaranteed to keep on following that behavior until the end of time. And if the system is following some other behavior which is reasonably similar to the attracting behavior, it is bound to eventually wind up following the attracting behavior, or something very, very close to it. (Strictly speaking, only deterministic systems display attractors; real systems always have a random aspect and thus display only "probabilistic attractors" which have a small but definite chance of being escaped. But the basic idea is the same.)

    One kind of attractor is a fixed point, which is a constant, steady state, or in physical terms, an equilibrium. It is a situation where a system keeps on doing the same thing, over and over again. For instance, once a pendulum stops swinging, and hangs limply at the bottom of its arc, it has reached a fixed point. Once a train of thought reaches an end, having arrived at a constant, definite conclusion, it has reached a fixed point.

    Another kind of attractor is a limit cycle, which is a periodic motion, oscillating around and around forever. Imagine a comet, zooming in toward the sun from another galaxy, and ultimately captured by the sun. When it becomes entrained in orbit around the sun, it has reached a limit cycle attractor. Periodic attractors are very common in biological systems -- circadian rhythms, sleep/wake cycles, hormonal cycles, and so forth.
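
    Both of these classical attractors can be watched in a few lines of simulation. The following Python sketch (the parameter values are arbitrary choices of mine) integrates a damped pendulum: start it from any angle, and friction draws it down to the fixed point at the bottom of its arc.

import math

# A damped pendulum, integrated with small Euler steps. Whatever the
# initial swing, the trajectory is drawn to the fixed-point attractor:
# angle = 0, velocity = 0 -- hanging motionless at the bottom.
theta, omega = 2.0, 0.0           # initial angle (radians), angular velocity
g_over_l, damping, dt = 9.8, 0.5, 0.001

for _ in range(40000):            # forty simulated seconds
    accel = -g_over_l * math.sin(theta) - damping * omega
    theta += omega * dt
    omega += accel * dt

print(f"after 40 seconds: angle = {theta:.5f}, velocity = {omega:.5f}")

Add a periodic driving force to balance the friction, and the same few lines will settle instead into a limit cycle.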

    Fixed points and limit cycles have been known for hundreds of years. What was new with the introduction of computer simulations was the discovery of other attractors. Simulating simple dynamical systems on their computers, scientists found that systems often "locked into" strange but comprehensible patterns of behavior. The systems weren't repeating, they weren't oscillating -- they were wandering about, but in vaguely predictable ways. Their behaviors, graphed in appropriate diagrams, sketched out intriguing fractal pictures, rather than single points, circles, or random blurs. These peculiar characteristic behaviors were called strange attractors.
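
    The easiest strange attractor to conjure up for oneself is probably the Henon map. Here is a sketch, assuming the standard parameter values a = 1.4 and b = 0.3:

# The Henon map: two variables, one simple rule, iterated over and over.
# The long-run behavior never repeats and never settles down, yet it stays
# confined to the same fractal-shaped region of the plane.
a, b = 1.4, 0.3
x, y = 0.0, 0.0
points = []
for step in range(10000):
    x, y = 1.0 - a * x * x + y, b * x
    if step > 100:                # discard the transient approach
        points.append((x, y))

xs = [p[0] for p in points]
print(f"{len(points)} points, all with x in [{min(xs):.3f}, {max(xs):.3f}]")

Plot those ten thousand points and the intriguing fractal picture appears: a curved band which, under magnification, resolves into bands within bands.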

    Often the behavior of a system, after it locked into a "strange attractor" behavior, was chaotic, in the sense that you couldn't predict exactly what it was going to do. You could tell what it was going to do in some approximate sense -- it was going to stay within the strange attractor. But you couldn't make exact predictions.

    Many systems that had typically been thought of as "periodic plus noise" turned out to be chaotic. For instance, the human heartbeat is not periodic; it follows a strange attractor. People with heart disease have overly orderly, periodic heartbeats; healthy people's heartbeats are more chaotic. The behavior of individual ants in an ant colony is chaotic, rather than random or easily predictable. The behavior of the whole colony, however, is highly structured and much more predictable.

    Examples in physics and chemistry abound: weather systems, lasers, organic and inorganic chemical reactions, pendula, etc. One system after another, which had previously been understood as basically periodic or basically random, turned out to display interesting "strange attractor" behaviors.

    In psychology, however, things have not proved so simple. Strange attractors of all sorts have been found in computer simulations of the brain; and Walter Freeman has found good evidence for strange attractors in the olfactory cortex of the rabbit (the part of the rabbit brain that deals with the sense of smell). Strange attractors and chaos have been reported in human mood fluctuations. However, although many theorists have suggested a fundamental role for strange attractors in mind/brain dynamics, this conjecture has not been conclusively demonstrated. The problem is that psychological data sets are much smaller and messier than one finds in physical science, so that many of the tools of dynamical systems theory are simply not applicable.

    There is a minor intellectual industry in the study of routes to chaos. As one changes the parameters of a system, so as to move the system gradually towards chaotic behavior, one can see the system move through stages of more and more "chaos-like" behavior. And, remarkably enough, a huge variety of different systems seem to display the same few methods of moving through stages toward chaos. The first such archetypal path toward chaos to be discovered was Mitchell Feigenbaum's "period-doubling route," in which a system oscillates in more and more intricate cycles, the period of its oscillation doubling again and again, until it ultimately "snaps" into chaos, and starts following a strange attractor.
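
    The period-doubling route can be watched directly in the logistic map. A brief sketch, with parameter values chosen by me for illustration:

# As r increases, the logistic map's attractor doubles its period again
# and again -- one settled value, then two alternating, then four -- and
# past roughly r = 3.5699 there is no repetition at all: chaos.
def settled_values(r, n_settle=1000, n_sample=8):
    x = 0.5
    for _ in range(n_settle):     # let the transient die out
        x = r * x * (1.0 - x)
    samples = []
    for _ in range(n_sample):
        x = r * x * (1.0 - x)
        samples.append(round(x, 4))
    return samples

for r in (2.9, 3.2, 3.5, 3.9):
    print(f"r = {r}: {settled_values(r)}")

At r = 2.9 the eight samples are one repeated number; at 3.2, two numbers alternate; at 3.5, four; and at 3.9 the samples never repeat.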

    Finally, at the forefront of dynamical systems theory, there are various attempts to use strange attractors to classify and categorize systems. Strange attractors are not all alike; they all have different shapes. Systems whose attractors have similar shapes are in some sense "behaviorally similar" -- they are similar at the emergent level, even though they may be totally different otherwise (e.g. one may be a brain component, another an ecosystem).

    The concept of attractor has revolutionized science, and for good reason: it is not merely a technical concept, it has deep philosophical grounding. In Western terms, we may say it is a reconciliation of Heraclitus and Plato. Heraclitus said everything is change; Plato viewed everything in terms of abstract, ideal structures. Attractors are abstract, ideal structures that emerge from change, structural patterns floating on top of a realm of fluctuating dynamics. They of course do not solve the ontological puzzles of being and becoming -- but they illustrate the overcoming of these puzzles in a dramatically practical way.

    ***

    Attraction is a very general idea; it applies to all sorts of different dynamics. Certain kinds of dynamical laws, however, seem to recur in complex systems. Foremost among these is the evolutionary law. Evolutionary adaptation seems to be ubiquitous in complex systems. It is no exaggeration to say that evolution is the intelligence of pranamaya.

    The basic logic of natural selection is a simple one. All one needs, in the simplest case, is a collection of entities that can reproduce themselves, sexually or asexually. Then one needs an environment that forces a "selection" between entities -- so that some entities are more likely to survive than others. Over time, this leads to the production of a population of entities that are adapted to their environment.

    This basic logic pops up again and again and again, in contexts apparently far removed from Darwin's evolution of organisms and species.    For instance, natural selection is the essence of Burnet's clonal selection theory, which is the core of modern theoretical immunology. In this era of AIDS, immunology is constantly in the newspapers, but very few people seem to know that the immune system actually evolves new antibody types by natural selection, every time it is presented with a novel antigen.

    Similarly, John Holland's genetic algorithms, based on natural selection, have proved tremendously effective as a technique for solving practical problems in engineering and computer science. Many people believe that genetic algorithms are the key to the computer programming of the future. Instead of writing a program, one needs only to specify the objectives that one wants the program to fulfill, and then evolve a population of programs coming closer and closer to fulfilling them.
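
    A toy genetic algorithm, in the spirit of Holland's idea, fits in a few lines of Python. The target bitstring and parameter values below are arbitrary inventions of mine; the point is that we never tell the program how to construct the answer -- we only score guesses, and let selection, crossover and mutation do the rest.

import random

# Evolve a population of bitstrings toward a target. Fitness counts the
# matching bits; the fittest fifth of each generation become the parents
# of the next.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(bits):
    return sum(b == t for b, t in zip(bits, TARGET))

def crossover(mom, dad):                        # "sexual" recombination
    cut = random.randrange(1, len(TARGET))
    return mom[:cut] + dad[cut:]

def mutate(bits, rate=0.02):
    return [1 - b if random.random() < rate else b for b in bits]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)  # "fitness"-based selection
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(40)]

best = max(population, key=fitness)
print(f"generation {generation}: best fitness {fitness(best)} of {len(TARGET)}")

Typically the full target is rediscovered within a few dozen generations, though nothing in the code ever assembles it directly.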

    The list goes on and on. Kenneth Boulding has formulated an evolutionary economics. Gerald Edelman has proposed that natural selection even governs the organization of neural pathways in the brain. Chemical evolutionists study the evolution of structure in prebiotic soups of chemicals, leading ultimately to the emergence of life.

    ***

    Finally, autopoiesis is less familiar than attraction or evolution, but it is by no means less essential. Indeed, the very obscurity of the concept of autopoiesis tells us a lot about Western science. For autopoiesis is nothing else but a precise formulation of the living nature of complex systems. Like evolution, autopoiesis is a particular kind of dynamic that is astoundingly common in real-world complex systems.

    Technically, "autopoiesis," the coinage of biologist Humberto Maturana, refers to the ability of complex systems to produce themselves. Autopoiesis is closely related to the more familiar concept of "self-organization," but there is a difference: self-organizing systems need only rearrange themselves, while autopoietic systems must produce themselves.

    The paradigm example of autopoiesis is the biological organism -- the body! A body consists of a collection of interconnected parts precisely designed so as to be able to support and produce each other. Another example, just as apt, is the modern economy. No individual is self-sufficient, and only a small percentage are truly parasitic; directly or indirectly most individuals rely for their lifestyle on the actions of most other individuals. Yet another example, which I have treated in detail in Chaotic Logic, is the belief system. No one who has argued with a fundamentalist Christian will doubt the autopoiesis of belief systems. Every point in the fundamentalist's argument is backed up by at least fourteen other points in her argument!
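
    The flavor of the concept can be caught in a deliberately cartoonish sketch -- a toy of my own invention, not a model from the biology literature. Three components each decay on their own, but each is produced by another; the closed loop of mutual production holds all three at a steady level, though no component is stable in isolation.

# Each component decays by 10% per step, but is produced by its neighbor
# in the loop A -> B -> C -> A (production saturates as a level nears 1).
# The loop as a whole sustains itself; any component cut off from its
# producer would dwindle toward zero.
production, decay = 0.2, 0.1
levels = {"A": 1.0, "B": 0.8, "C": 0.6}
producer = {"A": "C", "B": "A", "C": "B"}     # who produces whom

for _ in range(200):
    levels = {name: level
                    + production * levels[producer[name]] * (1.0 - level)
                    - decay * level
              for name, level in levels.items()}

print({name: round(level, 3) for name, level in levels.items()})

All three settle near the same nonzero level -- not because any one of them is stable on its own, but because the network of production is.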

    Autopoiesis relates to adaptation in an obvious way. Evolution serves to adapt systems, and thus to "improve" them in various ways, but in most real-world situations it only acts in the context of autopoiesis. Evolutionary adaptations which destroy autopoiesis will not lead to viable structures. On the other hand, evolutionary adaptations which lead to stronger, more stable autopoiesis will tend to lead to structures that survive. This is clearly what we have seen in the evolution of organisms.

    Autopoiesis also relates to attraction, in the sense that an autopoietic system is, itself, often an attractor. The process of evolving an autopoietic system is a process of autopoietic attraction. Starting from a certain initial population of systems, evolution makes variations by its operations of mutation and sexual reproduction. Some of these variations are successful, some are not. Gradually, the successful ones begin to lead along a path -- a path toward an attractor of the evolutionary dynamic, which represents a new autopoietic system. The three a-words all work together.

    ***

    At bottom, taking a more philosophical view, one sees that autopoiesis and evolutionary adaptation are representative of two fundamental forces in the universe.

    Autopoiesis is a force of conservation. It is the means by which structures preserve themselves. An autopoietic system is not a perpetual motion machine, but in a sense it acts like one. It dissipates energy, but it conserves pattern and structure.

    On the other hand, evolution is the means by which new structures come into existence. It is the means by which autopoietic systems form, and just as often, it is the means by which these systems are destroyed.

    Evolution and autopoiesis are opposed to each other, but they also cooperate with each other. It is the give-and-take between conservation and novelty that produces the intricately structured strange attractors that make up our world.

     LIVING INTELLIGENCE

    The principles of physical science, annamaya science, are easily presented in the abstract. The same is not the case with the science of living systems, pranamaya systems, however. Each living system is unique and is, to a certain extent, a law unto itself. In this section I will explore the concepts of complexity science as they reveal themselves in three living systems -- the immune system, the brain, and the ecosystem. I will place a special emphasis on the intelligence of these systems.

    Intelligence is usually associated with the brain. In fact, however, the body as a whole is a highly intelligent system, as is the ecosystem in which the body lives. The overall intelligence of the body and ecosystem has long been recognized by non-Western medicine, but is only now coming to the attention of science. There is a continuity between brain-intelligence, body-intelligence and eco-intelligence that cries out for scientific investigation. This continuity is the essence of pranamaya -- it is the link between the physical and mental worlds.

    It would take a lengthy book to do full justice to the intelligence of living systems. Here I will merely scratch the surface -- giving just enough information to indicate that pranamaya, the level of life, has a logic all its own, a logic of self-organization, evolution, and pattern. It is this logic that, with the emergence of complexity science, Western culture is finally beginning to tap into.

     Immune Intelligence

    Most people know that the immune system is the body's police force. It is the "thin line" separating the good guys of our body tissue from the bad guys -- germs, viruses and the like -- invading from outside. Most people, however, have very little idea of the immense intelligence and subtlety that underlies the apparently simple task of protecting the body. Identifying "bad guys" is difficult for police detectives, and is even more difficult for immune cells. The human immune system weighs about a pound (as opposed to three pounds for the brain), and is without doubt the second most intelligent body system.

    The workings of the immune system are still largely unknown. But knowledge is accumulating at a rapid rate, and from what we know today, it is still possible to form a reasonable intuitive picture of how the immune system does what it does. The immune system emerges as a complex network of pattern-recognition processes, achieving its formidable powers of learning and memory by dint of evolution, autopoiesis and large-scale trial and error learning. It is a paradigm case for biological intelligence, for the understanding of pranamaya -- life.

    ***

    Before entering into the subtler aspects of immune function, some simple facts will be required. The immune system is composed of around 10^12 cells called lymphocytes and around 10^20 molecules called antibodies. The primary purpose of the lymphocytes is to create and then release antibodies; and all the antibodies created by any given lymphocyte are identical (save for possible random errors). The surface of a lymphocyte is coated with a large number of antibodies which it has produced.

    Lymphocytes and antibodies travel throughout the body primarily by way of the bloodstream, entering tissues through capillary walls. They are formed and modified in the bone marrow, the thymus and the spleen; and they periodically pass through the lymphatic vessels, which take in all sorts of cells from the lymphatic capillaries, pass by the lymph nodes (where a large proportion of lymphocytes is always found), and then output back into the bloodstream via the subclavian veins.

    The immune system is in a constant state of flux, creating over a million lymphocytes and a hundred thousand billion antibodies every second. And it is astoundingly various, containing millions of different types of antibodies. To see why so much flux and variety is required, we must naturally ask: what is the function of antibodies?

    The major purpose of the immune system is to protect the body against invaders such as germs and viruses -- the word antigen is used to describe such invaders, and anything else which antibodies attack. The surfaces of most antigens are covered with repetitions of a relatively small number of structures: these structures are called "epitopes." The first step in the destruction of an antigen is for a number of antibodies to grab it, to latch onto it; and in order to do this an antibody must itself possess a structure which is to the epitope as a key is to a lock. This corresponding structure, this "key," is called the "paratope" of the antibody. An antibody is shaped like a Y or a T -- something like a lobster. According to this metaphor, its function is to grasp onto epitopes of antigens with its "claws."

    This "lock and key" process does not directly result in the destruction of the antigen; the purpose of the network of lymphocytes is primarily recognitive. Once a number of antibodies are latched onto an antigen, other cells move in to assist in the kill. Often an intricate chain reaction takes place, in which the proteins of the complement system converge on the antigen and destroy it. Killer T-cells may also play a role, as may nonspecific cellular agents such as phagocytes. We shall not be concerned with this aspect of immune response here.

    A general schematic description such as this inevitably omits a large number of important details. To see what sort of details are being omitted, let us consider one specific type of antigen: the protein molecule. Protein molecules play many important roles in the body (as enzymes and hormones, as the building blocks of cellular membranes, etc.), and they also form the outer layer of viruses and bacteria. Each one consists of a number of polypeptide chains, and each polypeptide chain is a long string of a few hundred amino acids (selected from the set of twenty amino acids) bonded end-to-end.

    A polypeptide chain is a "string" of amino acids, but a protein molecule looks more like a clump of spaghetti. The various polypeptide chains twist around each other according to an intricate process, and an antibody sees only the outside of the formation thus obtained. An epitope of a protein molecule is a region of its surface which is small enough to be latched onto by an antibody; such regions, in general, appear to be arrangements of no more than ten or so amino acids. The epitope is an extremely sensitive function of the structure of the protein; sometimes the replacement of a single amino acid can lead to a different epitope.

    Protein molecules are essential for immunology not only because they comprise many of the epitopes encountered in practice, but also because antibodies themselves are protein molecules, consisting of four polypeptide chains: two identical "light" chains containing around 200 amino acids, and two identical "heavy" chains containing around 400. All but about 50 of the first 110 amino acid positions on each polypeptide chain are identical for every antibody; these 50 are called "variable" positions. Although most variable positions can only be filled by two different amino acids, there are certain positions called "hot spots" which can be filled by more than two. At the tip of each variable position is a "combining site"; and the paratope of the antibody is simply the set of all its combining sites. Amino acids may not be assigned to the variable positions independently, but it is not clear exactly how constrained these choices are. It is known, however, that polypeptide chains fall into subgroups such that each member of a given subgroup has identical amino acids in some subset of the variable positions.

    There is one further complication here, one which is relatively poorly understood. I said above that lymphocytes produce antibodies. But in fact, lymphocytes fall into two categories, B-cells and T-cells; and only B-cells necessarily produce antibodies. T-cells fall into a number of subcategories. There are killer T-cells, which are helpful in the actual destruction of antigen, as mentioned above. There are memory T-cells, which increase the rapidity of immune memory. And there are helper and suppressor T-cells, which help or suppress the antibody production of B-cells.

    ***

    That the immune system is in essence a learning system is a fairly obvious fact. When the system is presented with a new antigen, an antigen it has never seen before, it has to create a paratope to match the epitope presented by the antigen. Each new antigen presents itself to the immune system as a puzzle, which it must solve using its own resources.

    Definitive proof that the immune system is actually learning is provided by the fact that the system can recognize newly created antigens -- antigens which it could not possibly have learned about through inheritance. The immune system does not contain enough antibodies to have a pre-existing match for any conceivable antigen that might be encountered. There are a lot of antibodies, but not that many. Somehow, the immune system has to actually solve the optimization problem of finding an antibody with a paratope which very nearly "matches" a given epitope. It has to learn how to recognize the patterns on the surfaces of antigens.

    The key to the immune system's learning process is a deceptively simple mechanism called clonal selection. Essentially, clonal selection is evolution by natural selection. It says that, when confronted with an antigen, the immune system tries out everything it's got. The antibody types that work better are stimulated to reproduce more, on a continual basis. Thus the useful antibody types will preponderate, and the useless ones dwindle.

    According to clonal selection, when members of one antibody type succeed in latching onto an antigen, this is only the beginning of the immune response. When the paratopes of a given lymphocyte are successful at latching onto an epitope of an antigen, the lymphocyte is either suppressed or stimulated. And when a lymphocyte is stimulated, it clones itself. What determines when a successful lymphocyte is suppressed and when it is stimulated is largely unknown; however, it is understood to be a function of T-cell behavior.

    If a successful lymphocyte is stimulated, then it clones itself and instead of one successful lymphocyte, there are two successful lymphocytes. And the same process applies to these two lymphocytes: if they are both also successful, they will both be cloned and then there will be four. When the antigen is wiped out, this process will stop, because the lymphocytes will no longer be successful.

    There are certain complications: for instance, only some of the "clones" of a given lymphocyte are true clones which are capable of subsequently cloning themselves; these are called "memory lymphocytes." The others are plasma cells, which cannot clone, but simply produce antibodies full-time; these are responsible for the majority of antibodies. A plasma cell is not a true clone in that it lacks the capacity for reproduction, but it does produce the same antibody type as its parent. A plasma cell is more powerful in the short run, but a memory cell, since it can reproduce itself, is more conducive to long-term power.

    Also, there is an interesting twist involved in the definition of "successful." Clonal selection appears to follow a "threshold logic," meaning that for every lymphocyte there is some number T such that the lymphocyte is cloned only after its antibodies latch onto T epitopes. This is biologically quite sensible: it dictates not only that antibody types which are ineffective at latching onto a given antigen are not reproduced, but also that a certain low level of antigen presence will be tolerated. Mathematically, it means that the equations governing the production of antibodies are extremely nonlinear, and hence not only difficult to analyze but potentially subject to highly complex and even chaotic behavior.
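
    The logic of clonal selection is simple enough to mimic in a few lines of code. Here is a heavily simplified sketch -- bitstring "antibodies," invented parameter values, and none of the real biochemistry -- in which types whose affinity for the antigen clears a threshold are cloned with occasional copying errors, while the rest tend to die off:

import random

# Toy clonal selection. Affinity = number of bits matching the antigen.
# Stimulated types (affinity >= THRESHOLD) survive and clone, with a 5%
# mutation rate per bit; unstimulated types survive only half the time.
random.seed(1)
ANTIGEN = [random.randint(0, 1) for _ in range(20)]
THRESHOLD = 12

def affinity(antibody):
    return sum(a == b for a, b in zip(antibody, ANTIGEN))

repertoire = [[random.randint(0, 1) for _ in range(20)] for _ in range(60)]
for round_number in range(30):
    survivors = []
    for antibody in repertoire:
        if affinity(antibody) >= THRESHOLD:    # the threshold logic
            clone = [1 - bit if random.random() < 0.05 else bit
                     for bit in antibody]
            survivors += [antibody, clone]
        elif random.random() < 0.5:
            survivors.append(antibody)
    repertoire = survivors[:120]               # resources are finite

best = max(repertoire, key=affinity)
print(f"best affinity after 30 rounds: {affinity(best)} of 20")

Nothing here is ever told what the antigen looks like; differential reproduction alone pushes the repertoire's best match steadily upward.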

    In spite of all the inevitable biological complexities, however, the essence of clonal selection is quite simple. Clonal selection is the key to immune intelligence -- it allows the immune system to learn. It represents a "trial and error" kind of learning: a lot of things are tried, and when something works, it is replicated. In the end, this is what all the biochemical intricacy comes down to.

    Clonal selection is precisely the kind of strategy that one would expect in a biological system: simple, robust, and dependent on what, to a human engineer, would look like incredible redundancy. A human engineer, designing an immune system, would be likely to design specialized learning mechanisms based on the physics of molecular bonding and so forth. Nature, instead, built a very simple mechanism, but one requiring a tremendous number of cells.

    ***

    One of the largest questions of immunology is something called "self/nonself discrimination." In simple terms: Why doesn't the immune system attack healthy cells of the organism to which it belongs? How does it tell which cells are the good guys, and which are the bad?

    Much to the vexation of transplant surgeons, a person's immune system will attack healthy cells coming from other people. If person A has a patch of the skin of person B grafted onto his arm, the grafted skin will not be accepted by the body; it will die because the immune system of person A will treat it as antigen.

    One might suggest that the immune system of each person contains genetic information as to the structure of these cells. But, in fact, if cells from an adult mouse are introduced into the body of a developing fetus, when the fetus grows up it will accept transplants from that particular mouse but not others. Apparently the immune system, at an early stage, learns which cells are its own and which are not, essentially by reasoning that whatever cells are there all the time are its own. This process may also take place during adulthood -- an immune system may learn to interpret something new as part of itself. But this is often troublesome. In order to promote organ transplants, it is now common to introduce agents which temporarily prevent T-cell activity in the region of the new organ. Newly created T-cells grow up interpreting the new organ as self, and eventually the old T-cells die.

    However, there is one important exception to the rule that the immune system does not attack healthy cells of the body of which it is a part: one antibody can indeed attack another. They never learn not to do this. If the paratope of one antibody happens to match the epitope of another, then the former will latch onto the latter just as if it were an extraneous antigen. Obviously, this leads to extremely complicated dynamics.

    Clearly, this kind of inter-immune attacking could lead to undesirable results. One can imagine runaway cycles, immunological arms races: A attacks B, so B attacks A, so A attacks B, so B attacks A.... Luckily, however, the immune system seems to contain so many interconnected chains and cycles that any runaway cycles are restrained by antibody types lying outside the cycle.

    An interesting implication of this "immune network" is that, even in the absence of external antigen, it is not necessary that the overall immune system settle into an equilibrium state, in which nothing attacks anything else with sufficient success to be cloned. All that is required is that the system is balanced in such a way that runaway chains and runaway cycles are rapidly restricted.

    ***

    We are all familiar with the phenomenon of vaccination. Underlying the action of vaccines is the ability of the immune system to maintain an antibody type corresponding to an antigen it has encountered in the past. Remarkably, however, immunologists do not yet understand the means by which this occurs, even when the time since the encounter with the antigen has been far longer than the life of a lymphocyte. Somehow, the immune system knows to generate a certain number of new memory cells of each type every now and then.

    One theory as to how this might happen is provided by the concept of immune learning, combined with the assumption of a highly intricate and active immune network. Consider the situation in which two antibody types, A and B, mutually recognize each other. And suppose that an antigen is suddenly introduced into the system. Assume A has decent success at recognizing epitopes of the antigen, and that it therefore clones and mutates, yielding a slightly different antibody type A' which is significantly more successful with the antigen. Now, since A has proliferated, B has also cloned significantly. As long as none of the mutations of A have proliferated significantly, B has no need to mutate; we have assumed it does well at latching onto A. But when the population of A' increases, B is presented with a large number of antibodies which it only comes close to matching perfectly; so it will begin to mutate, and with a little luck this mutation will yield a new strain, B', which is a near-perfect match for A'.

    So, if we accept that the "matching" of A' and B' is symmetric, i.e. that A' is excellent at latching onto B' as well as vice versa, then it is clear that what has occurred is the creation within the immune system of a model of the antigen. The simple process of immune learning yields a clear and elegant explanation of the process of immune memory: the immune system remembers an antigen because it creates a model of the antigen and periodically practices latching onto the model. This explanation is clearly not limited to cycles consisting of two antibody types; the same logic applies to a cycle of any size.

    ***

    Some immunologists doubt the existence or importance of this immune network, this dynamical network of interattacking antibody types. They maintain that the immune system is a collection of antibodies independently evolving by clonal selection -- natural selection. However, others believe that it may be precisely the self-referential nature of the immune system -- the dependence of the evolution of each antibody upon the evolution of others -- which accounts for its impressive feats of antigen recognition and memory.

    The results of recent computer simulations support a moderate view. Over the last decade, Alan Perelson (of Los Alamos National Laboratory), Rob de Boer (of the Institute for Bioinformatics in Utrecht), and others have created a number of intriguing computer simulations of the immune system. From the results of these simulations, they have come to the conclusion that "the immune system combines a highly connected network with a functionally disconnected clonal system."

    The details vary from one simulation model to the next, but the results are qualitatively similar. When a large group of antibodies is permitted to interact freely according to the known laws of immunology, the antibodies tend to join together into one large connected structure: a clump of antibodies all structurally related to each other, and all recognizing patterns in each other. However, this structure is not a fixed collection of antibodies; it "moves around in shape space," leaving some antibodies behind and picking up others. The entire network is "one large frozen component which enables some isolated clones to remain functionally disconnected from the network."

    This frozen component is, in dynamical systems language, a subtle and sophisticated kind of attractor. It is not a fixed-point attractor, nor a periodic attractor, but rather a kind of strange attractor. It never exactly repeats itself; it wanders from one form to another while keeping the same basic shape. It is a strange attractor born of adaptation: of the adaptation of each antibody type to the other antibody types around it. It is also an autopoietic system: after all, it is a result of the different antibody types actively causing one another to be produced. In the absence of external antigenic stimulation, none of the antibody types in the network would necessarily be produced without the impetus of the other antibody types in the network.

    Finally, this frozen component exists by virtue of a phase transition. If the antibody types were not good enough at recognizing patterns, or if their number were too few, the emergent structure would not arise. But once the system passes the critical size, given the abilities of its components, the structure reliably emerges.
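
    The flavor of this phase transition can be captured in a few lines of Python. In the toy version below -- my own illustration, not the Perelson-de Boer model -- antibody types are random bit strings, and two types recognize each other if their strings are complementary at enough positions. As the matching criterion is loosened, the largest connected clump of types jumps from a few isolated members to nearly the whole population.

    import random
    from itertools import combinations

    def matches(x, y, threshold):
        # two types "recognize" each other if their bit strings are
        # complementary at `threshold` or more positions
        return sum(xi != yi for xi, yi in zip(x, y)) >= threshold

    def largest_clump(n, length, threshold):
        types = [[random.randint(0, 1) for _ in range(length)]
                 for _ in range(n)]
        adj = {i: [] for i in range(n)}
        for i, j in combinations(range(n), 2):
            if matches(types[i], types[j], threshold):
                adj[i].append(j)
                adj[j].append(i)
        seen, best = set(), 0
        for s in range(n):           # find the largest connected component
            if s in seen:
                continue
            seen.add(s)
            stack, size = [s], 0
            while stack:
                v = stack.pop()
                size += 1
                for w in adj[v]:
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
            best = max(best, size)
        return best

    random.seed(1)
    for threshold in (20, 19, 18):   # strict to loose matching
        print(threshold, largest_clump(200, 24, threshold))
    # exact sizes vary with the seed, but the jump from scattered
    # fragments to one giant clump is robust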

    All the crucial features of complex systems are therefore demonstrated in computational models of the immune system. These models may not be entirely accurate; the odds are, however, that they underestimate rather than overestimate the subtle complexity and intelligence of the immune system.

    ***

    Scientists have not yet been able to fathom the full complexity of the immune system. The role of the network and the nature of immune memory are still largely open questions, and the functions of T-cells remain largely unknown. Prodded on by the AIDS epidemic, funding for immunological research is at an all-time high. But AIDS, as a disease affecting T-cells, lies just beyond the realm of current immunological theory. Intriguingly, the HIV retrovirus appears to pose a kind of insoluble logical puzzle for the immune intelligence. It acts as a kind of "counterintelligence agent" which, by borrowing the DNA of the immune system's own cells, manages to constantly mutate its form so as to avoid recognition.

    Despite the incompleteness of current knowledge, however, we may still draw some general conclusions from our review of immune intelligence. First of all, immune intelligence provides striking proof of the power of trial and error. The immune system does a very good job of pattern recognition despite the tumultuous, often random-looking nature of its dynamics, and a great deal of this effectiveness can be explained through sheer strength of numbers: if the system makes enough guesses, it is eventually bound to come close to the right answer.

    The general implications of immune memory are even more interesting, particularly the relation between immune memory and brain memory. Today, no one still believes that the brain stores each datum in its own little compartment. It is generally understood that, somehow, most of the information in the brain is stored holistically, spread across a wide region. But little else about long-term memory is agreed upon. Above all, the relation of long-term memory to thought is not well understood. Is long-term memory merely a repository from which thought draws information? Or is it an integral part of the thought process? Consideration of the immune system suggests the latter. In the immune system, memory seems to work partly through the continual interaction of pattern-recognizing agents. Memory is, partially, a product of turning the pattern-recognition process on itself. This makes it somewhat plausible that neural memory is a product of various "traces" continually acting on each other -- a hypothesis which also helps to account for the creative/destructive unreliability of memory.

    In sum, we can say that, if the brain is anything like the immune system, its capabilities for pattern recognition and long-term memory should emerge in a simple, natural way from the interactions of simple learning agents. In order to remember something, a biological system does not "write" that thing into some particular location, like a contemporary digital computer, but rather automatically keeps that thing alive by having it continually interact with other entities also "in memory."

    ***

    The essence of biology, of life, of pranamaya is thus revealed as self-organization and self-production -- attractors, adaptation and autopoiesis. The way a biological system achieves intelligence and complexity is by producing its own components, recognizing patterns in itself and its environment, and continually, creatively remaking itself. This is true in the immune system. It is true, as we shall see, in the brain. And it is doubtless true in other body systems as well -- as the science of the future will reveal.

    Non-Western cultures have long understood the integral, holistic nature of living systems. This understanding underlies non-Western systems of science, exercise, nutrition and medicine. With our scientific techniques, however, we have managed to penetrate far more deeply into the mechanisms which undergird the holism of living systems. Rather than merely recognizing the intelligence and subtle responsiveness of the body's resistance to disease, we can identify the cells making up the immune system, and understand the self-organizing pattern recognition processes from which this intelligence and subtle responsiveness emerges. "Complexity" is our word for how holistic processes emerge from underlying mechanisms -- it is our bridge from annamaya to pranamaya.

     The Self-Organizing Brain

    The brain is larger, more complex and more intelligent than the immune system, but it seems to display many of the same basic properties. Both are living systems, partaking of the logic of pranamaya, the logic of complexity. Both work by building up the components provided by physical reality into complex, self-organizing structures -- into emergent patterns that achieve some degree of independence from their physical substrates, and take on "a life of their own."

    ***

    Again, before getting to the interesting part, a few facts must be recounted. Let us begin with the nerve cells in the brain, which are called "neurons." Neurons are not the only brain cells; in fact, they are greatly outnumbered by glia, smaller cells that fill in the spaces between neurons. However, nearly all contemporary neuroscientists believe that the key to mental process lies in the large-scale behavior of networks of neurons.

    A neuron consists of a cell body with a long, narrow axon emerging from one end, and a large number of branches called dendrites snaking out in all directions. The dendrites are inputs -- they receive electrical signals from other neurons. The cell body periodically generates -- "fires" -- a new electrical impulse based on these input signals. After it fires, it needs to "recover" for a while before it can fire again; and during this period of recovery it basically ignores its input.

    The axon carries the electrical impulse from the cell body to the dendrites and cell bodies of other neurons. The points at which signals pass from one neuron to another are called synapses, and they come in two different forms -- excitatory and inhibitory. When an impulse arrives through an excitatory synapse, it encourages the receiving neuron to fire. When an impulse arrives through an inhibitory synapse, it discourages the receiving neuron from firing.

    Each synapse has a certain conductance or "weight" which affects the intensity of the signals passing through it. For example, suppose excitatory synapse A has a larger "weight" than excitatory synapse B, and the same signal passes through both synapses. The signal will be more intense at the end of A than at the end of B.

    Roughly speaking, a recovered neuron fires if, within the recent past, it has received enough excitatory input and not too much inhibitory input. How much excitation is "enough," and how much inhibition is "too much," depends upon the threshold of the neuron. If the threshold is minimal, the neuron will always fire when its recovery period is over. If the threshold is very high, the neuron will only fire when nearly all of its excitatory synapses and virtually none of its inhibitory synapses are active.

    A simple "neural network model," then, is a network of interconnected neurons, firing charge into each other. This is the standard complexity-science model of the brain, and it is a useful model, in spite of its shortcomings. The model can be made more realistic by incorporating some of the chemistry that mediates actual neural interaction. In the brain the passage of an electrical signal from one neuron to another is not exactly analogous to the passage of electricity across a wire. This is because most neurons that are "connected" do not actually touch. What usually happens when a signal passes from neuron A to neuron B is that the dendrites of neuron A build up a charge which causes certain chemicals called neuro-transmitters to carry that charge to the dendrites of neuron B. Most drugs work by interfering with the normal activities of neurotransmitters. Psychedelic drugs seem to work by impersonating neurotransmitters, and thus transmitting charge where, normally, no charge would be transmitted.

    ***

    What do networks of neurons have to do with thoughts, memories and feelings? The most promising answer to this question is, in my view, given by the cell assembly theory. This theory, tracing back at least to the work of Donald Hebb in the late 1940s, states that mental entities are neural circuits.

    This idea brings us back to the immune network. In the preceding paragraphs, immune learning was portrayed as a consequence of trial and error, and immune memory was portrayed as a consequence of feedback loops in the immune network -- i.e. of self-organization and self-production. Similarly, in the brain, one may think of the vast number of neuronal circuits as being a pool of "patterns" each one of which potentially matches stimuli from the outside, or from other parts of the brain. And one may think of memories as consisting of reverberating neural circuits, formed from smaller circuits linked together. This is cell assembly theory.

    Hebb's idea, in particular, was that reverberating circuits are eventually reified in the structure of the brain's neural networks. He proposed a mechanism by which this might happen: very simply, when the connection between neuron A and neuron B is used very frequently, then the synapse between A and B becomes more conductive, so that charge can pass more easily from A to B. By this mechanism, he proposed, the "trial and error" soup of neuronal connections produces definite pathways, definite circuits and structures representing ideas, feelings and memories.
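
    Hebb's rule is simple enough to state in a few lines of code. The sketch below is a bare illustration with an arbitrary learning rate: it strengthens the synapse from neuron j to neuron i whenever j's firing is followed by i's.

    def hebbian_update(weights, previous, current, rate=0.05):
        # weights[i][j] is the synapse from neuron j to neuron i;
        # `previous` and `current` record which neurons fired on the
        # last two time steps
        for i, fires_now in enumerate(current):
            if fires_now:
                for j, fired_before in enumerate(previous):
                    if fired_before:
                        # a frequently used connection grows more conductive
                        weights[i][j] += rate
        return weights

    w = [[0.0, 0.2], [0.2, 0.0]]
    w = hebbian_update(w, previous=[True, False], current=[False, True])
    print(w)    # the synapse from neuron 0 to neuron 1 has been strengthened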

    Hebb's visionary ideas have been, to a great extent, validated by recent neurophysiological research. Scientists have observed "long-term potentiation," by which connections that are repeatedly used become, in the future, more conductive to the passage of charge. In some cases, repeated stimulation of a neuron will even cause new connections to "sprout" there.

    In dynamical systems terms, we may say that mental entities are attractors of neural networks. Neural networks iterate over time, changing their state again and again, until they settle into some kind of characteristic behavior. This characteristic, roughly stable behavior is a thought, feeling or memory. In the simplest neural network models, we find only fixed-point attractors; but real neural networks seem to be dominated by periodic and strange attractors. For instance, UC Berkeley neuroscientist Walter Freeman has worked out a very detailed model of the rabbit's olfactory bulb. He finds that, in rabbit olfaction, each smell the rabbit can recognize is represented by a certain periodic attractor, while the process of searching through memory to identify a smell is represented by neural chaos. In a slogan: chaos for search and learning, periodicity for knowledge. David Alexander, an Australian psychologist, has worked out a complex and detailed brain theory based on Freeman's ideas.
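
    The notion of a network "settling into" an attractor is easy to demonstrate. In the following toy network -- with invented weights and deterministic updating, nothing like Freeman's model -- the state space is finite, so the trajectory must eventually revisit some state; the length of the loop it falls into distinguishes a fixed-point attractor (period one) from a periodic attractor (period greater than one).

    import random

    random.seed(3)
    N = 8
    w = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

    def update(state):
        # each neuron fires on the next step if its weighted input is positive
        return tuple(int(sum(w[i][j] * state[j] for j in range(N)) > 0)
                     for i in range(N))

    state = tuple(random.randint(0, 1) for _ in range(N))
    seen = {}
    t = 0
    while state not in seen:   # iterate until some state recurs
        seen[state] = t
        state = update(state)
        t += 1

    print("settled into an attractor of period", t - seen[state])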

    ***

    An interesting variant on the cell assembly theory is Gerald Edelman's theory of neuronal group selection, or "Neural Darwinism." Edelman is a Nobel Prize-winning immunologist who, late in his career, struck by the close relationship between immune intelligence and brain intelligence, decided to turn his formidable knowledge and powers to neuroscience.

    The starting point of Neural Darwinism is the observation that neuronal dynamics may be analyzed in terms of the behavior of neuronal groups. The strongest evidence in favor of this conjecture is physiological: many of the neurons of the neocortex are organized in clusters, each containing perhaps 10,000 to 50,000 neurons.

    Once one has committed oneself to looking at groups, the next step is to ask how these groups are organized. A map, in Edelman's terminology, is a connected set of groups with the property that when one of the inter-group connections in the map is active, others will often tend to be active as well. Maps are not fixed over the life of an organism. They may be formed and destroyed in a very simple way: the connection between two neuronal groups may be "strengthened" by increasing the weights of the synapses joining the one group to the other, and "weakened" by decreasing those weights.

    This is the set-up, the context in which Edelman's theory works. The meat of the theory is the following hypothesis: the large-scale dynamics of the brain is dominated by the natural selection of maps. This is the "Darwinism" part, and the part that links the immune system to the brain. Both the immune system and the brain seem to instantiate a logic of trial and error adaptation which brings to mind Darwinian evolution.

    Those maps which are active when good results are obtained are strengthened; those maps which are active when bad results are obtained are weakened. And maps are continually mutated by the natural chaos of neural dynamics, thus providing new fodder for the selection process. Using computer simulations, Edelman and his colleague Reeke have shown that formal neural networks obeying this rule can carry out fairly complicated acts of perception.
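
    The logic of map selection can be mimicked by a toy evolutionary loop. The sketch below is in no way Edelman and Reeke's simulation -- the "maps" are bare weight vectors, the "good results" are closeness to an arbitrary target behavior, and all rates are invented -- but it shows the strengthen-the-successful, mutate-and-repeat dynamic at work.

    import random

    random.seed(7)
    TARGET = [0.2, -0.5, 0.9, 0.1]     # the behavior that earns "good results"

    def result_quality(m):
        return -sum((a - b) ** 2 for a, b in zip(m, TARGET))

    maps = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(20)]
    for generation in range(200):
        maps.sort(key=result_quality, reverse=True)
        strengthened = maps[:10]       # maps active during good results survive
        mutants = [[x + random.gauss(0, 0.1) for x in m]
                   for m in strengthened]
        maps = strengthened + mutants  # the weakened maps fall away

    print(round(result_quality(maps[0]), 4))   # near zero: a well-adapted map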

    This is essentially the same idea as Hebb's cell assembly theory, but it is phrased in new terminology, and is developed in a more biologically sophisticated way. Edelman's stress is on the evolutionary nature of brain dynamics, whereas Hebb's was on the formation and strengthening of cell assemblies. In his book Neural Darwinism, Edelman presents neuronal group selection as a collection of precise biological hypotheses, and presents evidence in favor of a number of these hypotheses.

    To put it a little differently, while the cell assembly theory speaks in terms of attractors, Edelman draws in adaptation. Not only are mental entities attractors of neuronal dynamical systems, but they are attractors that must continually adapt to one another, each one matching up with its neighboring attractors in a harmonious way. The immune network, in which antibody types must adapt to one another, is mirrored by the network of neuronal groups and neuronal maps.

    And, with a little thought, it is not hard to see how autopoiesis comes into the picture too. Quite simply, each neural assembly may be thought of as creating other neural assemblies. Consider, for example, the neural assembly corresponding to the word "Up." Now, this assembly does not physically create the cells which make up other assemblies, of course. But it does direct charge through other assemblies, thus selectively reinforcing the connections between their neurons. Thus, although our assembly for "Up" cannot actually cause new cells to form, it can create new assemblies in two ways. First, it can modify other assemblies, changing the strengths of their internal interconnections, causing them to leave out some of their previous member cells and incorporate new member cells. Second, it can cause the formation of a new assembly out of neurons that were not previously grouped together in an assembly.

    A self-supporting system, an attractor for a neural-assembly autopoietic system, is then a family of neural assemblies which mutually reinforce each other. Intuitively, it would seem clear that this is exactly the sort of family of neural assemblies that should tend to have long-term staying power amidst the chaotic self-organization of the brain. The brain emerges as a spatial network of mutually inter-adapting attractors, involved in a multitude of overlapping autopoietic subsystems.

    ***

    Finally, it should be emphasized that, as much as we have found out about the brain in recent years, there is immensely more that we do not know. For instance, while Edelman's theory is very impressive, I am a little skeptical that the simple process of mutation and selection is sufficient to generate, over a reasonable period of time, tremendously complex structures such as those found in the human mind. It seems likely that some more powerful generative mechanism is required. In The Evolving Mind I have put forth the hypothesis that neural maps may reproduce by crossover as well as mutation -- i.e., that, by some mechanism, parts of neuronal maps may combine with parts of other neuronal maps to form new neuronal maps. This is a highly radical proposition, and one which the neurophysiologists will ultimately have to judge.

    In order to carry out crossover on the level of neuronal groups, a brain would have to have the capacity to create new neuronal maps out of nothing. This presupposes that the brain has a tremendous ability to rearrange itself. There is evidence, however, that the human brain is indeed this flexible. For example, there is the case of the Italian woman who, before she was trained as an Italian-English translator, held both English and Italian primarily in her left brain. After her translation training, her knowledge of English moved over to the right. The brain is evidently capable of moving whole networks of maps from one place to another -- an aspect of brain function which current brain theories can barely fathom!

     Self-Organizing Evolution

    We have talked a lot about adaptation and evolution -- in the abstract, and in the context of brains and immune systems. It is now time to go directly to the source, and talk about evolution in the system in which it was first discovered: the ecosystem.

    Just as Western science has tended to ignore the wisdom of the body, so it has tended to ignore the holistic and integrated nature of the biological environment. Even today, most policymakers seem to assume that the environment can be understood in a localized, simplistic way. They do not understand, or pretend not to understand, the subtle processes by which the extinction of a species, or the depositing of toxic waste, can affect the entire ecological dynamics of a large region.

    Ecologists have long understood the subtle webs of interaction that make up ecosystems. What they have not generally grasped, however, is the deep connection between ecology and evolution. As it turns out, ecology and evolution are richly interdependent in ecosystem dynamics, just as they are in brain dynamics and immune system dynamics. One cannot talk about the evolution of species without talking about the web of pattern binding species to their environment. In this brief section on evolution, I will focus on this relation between evolution and ecology.

    As we review these new ideas, however, it should not be forgotten that the recognition of subtle biological interdependence is new only to science. To tribal cultures, living at one with nature, the philosophy of this "new biology" was nothing more than common sense. Of course change and self-preservation are bound up together in real ecosystems. Of course nothing can be understood in isolation -- all the parts of the ecosystem form a richly interconnected whole. Of course pranamaya, the world of life, extends outside the individual body and through the network of all living things. What could be more simple and obvious?

    ***

    Let us begin by clarifying the concept of natural selection.

    Artificial selection is what breeders do. They know that different plants and animals of the same species have widely varying characteristics, and they also know that a parent tends to be similar to its offspring. So they pick out the largest from each generation of tomatoes, or the fastest from each generation of horses, and ensure that these superior specimens reproduce as much as possible. The result is a consistently larger tomato, or a consistently faster horse.

    With artificial selection, however, the selection is merely a tool of the breeder, who sets the goal. If one wishes to use artificial selection as a model for natural evolution, one must answer the question: what is the goal, and who sets it? The simple, brilliant answer of Charles Darwin and Alfred Russel Wallace was: the environment sets the goal, which is survival.

    Herbert Spencer's phrase "survival of the fittest" captures one interpretation admirably well: the idea of the struggle for existence, according to which various organisms battle it out for limited resources, and the best of the bunch survive. This interpretation of natural selection, combined with Mendelian genetics, forms the theory which I call strict Darwinism -- an evolutionary research programme which essentially dominated biology throughout the middle of this century. As observed in the Introduction, most of the recent attempts to model the mind or brain in evolutionary terms implicitly accept strict Darwinism as fact.

    Essentially, strict Darwinism views an organism as a bundle of "traits." Random genetic variation causes traits to change, little by little, from one generation to the next. When a trait changes into a better trait, the organism possessing the improved trait will tend to be more successful in the struggle for existence, as compared to those organisms still possessing the old trait. Thus the better trait will tend to become more and more common. This point of view encourages the idea that every aspect of every organism is in some sense "optimally" constructed.
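
    This picture is easy to caricature in code -- and the caricature is instructive, because it works only by virtue of the assumption about to be criticized. In the sketch below (all details invented), an organism is literally a list of independent traits, each scored on its own; small random variations in single traits accumulate until every trait is "optimal."

    import random

    random.seed(11)
    NUM_TRAITS = 6

    def fitness(organism):
        # each trait contributes to fitness independently of the others
        return sum(-abs(trait - 1.0) for trait in organism)

    population = [[random.uniform(0, 2) for _ in range(NUM_TRAITS)]
                  for _ in range(30)]
    for generation in range(300):
        population.sort(key=fitness, reverse=True)
        parents = population[:15]          # "the best of the bunch survive"
        offspring = []
        for p in parents:
            child = list(p)
            k = random.randrange(NUM_TRAITS)
            child[k] += random.gauss(0, 0.05)   # a small variation in one trait
            offspring.append(child)
        population = parents + offspring

    print(round(fitness(population[0]), 3))    # near zero: every trait "optimized"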

    More and more biologists, however, are coming to believe that strict Darwinism is a misinterpretation of the facts. Stephen Jay Gould has been among the most ardent advocates of the alternative, complexity-oriented point of view. The view which Gould urges holds that it is impossible to decompose an organism into a vector of traits: that the qualities which characterize an organism are deeply interconnected, so that small changes induced by random genetic variation will often display themselves throughout the whole organism. And it holds that "better" is not definable independently of evolutionary dynamics: the environment of an evolving organism consists partly of other evolving organisms, so that the organisms in an ecosystem may be "optimal" only in the weak sense of "fitting in relatively well with one another." In short, this view interprets natural selection in terms of self-organization on the intra-organismic and inter-organismic levels.

    I believe that Gould's self-organization-oriented view is correct, and that, as more and more data come in from molecular biology, natural history and computer simulation, it is going to become the consensus view. Strict Darwinism attempts to reduce pranamaya to annamaya -- attempts to explain the whole in terms of the parts, and to reduce the fertile complexities of life to a simple, mechanistic dynamic. This kind of reduction can be fruitful, to a limited extent, but it can never tell the whole story.

    ***

    In their book The New Biology, Robert Augros and George Stanciu summarize the strict Darwinist point of view on evolution in three simple points.

    First, the principle of geometrical increase: populations increase exponentially, so that if reproduction were unchecked, the earth would rapidly be overrun with organisms. This observation was first made by Malthus, and indeed, reading Malthus was a crucial part of Darwin's discovery of natural selection.

    Second, the principle of competition: that, because there isn't room for everyone, organisms must compete for space, for food, and for scarce resources generally.

    And third, the principle of variation and selection, which states that slight variations will accumulate over time and yield large changes -- new species.

    In the body of their book, Augros and Stanciu present detailed arguments against these three points, gleaned from a careful study of the biological research literature. Of course, the original view of Charles Darwin himself was much more complex than this summary indicates. However, these three points do epitomize Darwin's thought as it influenced evolutionary biologists for most of this century.

    First, Augros and Stanciu cite a study of over three thousand elephants in Kenya and Tanzania, showing that "the age of sexual maturity in elephants was very plastic and was deferred in unfavorable situations.... Individual animals were reaching maturity at from 8 to 30 years." They cite similar studies for "white-tailed deer, elk, bison, moose, bighorn sheep, Dall's sheep, ibex, wildebeest, Himalayan tahr, hippopotamus, lion, grizzly bear, dugong, harp seals, southern elephant seals, spotted porpoise, striped dolphin, blue whale, and sperm whale... rats, mice and voles... some species of birds." Also, they note that, in many species, the number of offspring is a function of the amount of food available.

    Against the idea of the "struggle for existence," they point to the well-known organization of ecosystems into "niches." Evolutionary biologist Niles Eldredge has spoken of the many "ecologists skeptical of the very concept of competition between species ... who claim they simply cannot see any evidence for such raw battling going on nowadays in nature." Predation is a highly significant source of interspecies violence, and intraspecies combat is not infrequent either, as in disputes over territory. But competition between different species is not common at all. In general, it seems that different species live in slightly different places, and/or eat slightly different foods. One striking example of this is provided by MacArthur's classic study of the warbler. MacArthur demonstrated that five species of warbler, similar in size and shape, feeding on budworms in the same spruce trees, avoid competition by consistently inhabiting different regions of the trees. One species tends to remain near the tops of the trees, another toward the outside middle, and so on.

    The examples go on and on. A lion and a cheetah rarely fight -- the cheetah, with its superior speed, simply runs away. A large bird and a small bird, whether of the same or different species, rarely fight over the same piece of food. The small bird gives up quickly, instinctively recognizing a severe threat to its life.

    Competition does exist, but it is far from prevalent. Cooperation is at least as common. Fungus and algae combine to form lichen. Or, more esoterically, the Calvaria tree of Mauritius island "has not been able to germinate for over three hundred years... since the extinction of the dodo.... The dodo, in eating the Calvaria fruit, ground and abraded the hard shell of the pit in its powerful gizzard, such that when excreted, the seed was able to penetrate the shell and grow. Without the dodo's help the Calvaria seed could not break through its own shell." And locust trees, like other legumes, carry nitrogen-fixing bacteria on their roots which, even in poor soil, boost the average height of surrounding cedar trees from thirty inches to seven feet.

    Finally, Augros and Stanciu point out that there is absolutely no evidence in favor of the hypothesis that small variations in particular traits can accumulate so as to create new species. We have seen the emergence of new breeds within a species, but we have never witnessed the birth of a new species, so there is no way to directly test any theory of species creation. And what is known of the genetics of mutation is of little help: it is sketchy and full of mysteries. For instance, there are huge sections of DNA for which no particular purpose has been identified.

    This point is related to Gould and Eldredge's theory of "punctuated equilibrium" -- the hypothesis, strongly supported by the existence of huge gaps in the fossil record, that species creation is not gradual but rather occurs in relatively sudden jumps (lasting perhaps as little as ten thousand years). Although Darwin himself was an avowed gradualist, the phenomenon of punctuated equilibrium does not explicitly contradict Darwin's theory of natural selection. But it does pose the theory a rather difficult question: exactly how do small variations set off large changes?

    All in all, the picture that emerges is one of the ecosystem as an autopoietic system. The different parts all mesh together and support each other in a way that strict Darwinism does not recognize. Each part gives in response to the other parts, rather than constantly struggling with them in some kind of endless bloody competition.

    Strict Darwinism acknowledges one side of the basic duality: the force of novelty, of innovation, as represented by random variation. But it does not even do justice to novelty, because it treats traits as separate, and ignores the complexity of organisms, by which traits combine to form new traits. And it ignores the opposing force of conservation, autopoiesis, almost entirely. In sum, strict Darwinism is a flawed view, both philosophically and scientifically.

    ***

    To gain a deeper understanding, let us look back at the immune system. The immune system, like the ecosystem, is an evolving system. What are the commonalities between the two of them?

    An immune system has a definite goal: to recognize antigen, grab onto it and destroy it. There is natural selection, in the sense that an antibody which is of no use at attacking antigen is less likely to survive than one which is useful for this purpose. But this simple selectionist dynamic is modified and sometimes stifled by the existence of self-organization, by the existence of the immune network. The network is the immune system's ecology: it is the autopoietic context in which the evolutionary dynamic of clonal selection must be understood.

    Competition does not necessarily play a major role in immune evolution, because different antibody classes tend to have different "niches," different places in the network. Predation of a sort may be common (antibody types wiping out other antibody types), but this is different from competition. Cooperation is prevalent, in that each element of the network directly or indirectly supports other elements of the network. In this way an antibody class which is useless for recognizing common antigens may be preserved, simply because it is useful for recognizing other antibodies.

    This picture of the immune system is a nice counterpart to the work of Augros and Stanciu. It suggests that in the ecosystem, as in the immune system, natural selection must be understood in the context of self-organization. The "fittest" organism, or antibody, is not the one which beats out all its competitors in some sort of contest, but rather the one which best makes a place for itself in the complex network of relations which its peers, together with the physical environment, create. There is a sort of contest to adapt most effectively, but it is a contest in which the playing field, and the rules of the game, vary a great deal from competitor to competitor (and are indeed created by a complex dynamic involving all of the competitors).

    In short, the whole ecosystem is alive, not just the individual organisms -- just as the whole immune system has its own life and intelligence, apart from the individual entities that make it up. Ecosystems and immune systems, like brains, organisms, societies and other biological systems, have their own level of living order -- an order which manifests itself in terms of strange attractors, formed through a balance of autopoiesis and adaptation.

    ***

    Finally, the concept of "fitness" perhaps deserves a more careful analysis. Everyone has heard the argument: Evolution by natural selection is a tautology, for it postulates the survival of the fittest, but how do we know who the fittest is except by seeing who survives? Herbert Spencer, one of the first to make this "observation," saw it as proof of the universal philosophical truth of evolution by natural selection.

    But, of course, Darwin never considered natural selection to be tautologous. "Survival of the fittest" was Spencer's phrase, and opinions differ as to what Darwin meant when he adopted it. My own inclination is to interpret "fitness" in the sense of "most closely fit to the environment" -- using the word "fit" as in the sentence "My coat fits well." Under this interpretation there is no tautology -- only a problem, the problem of defining what it means to be closely fit to the environment.

    The concept of fitness has always been rather slippery. In the strict Darwinist literature, one often reads of the "fitness value" of a specific trait, but no general definition of fitness is ever given. Very few indications are ever given as to how one might determine the fitness value of a specific trait in the context of life in an actual ecosystem. In this way strict Darwinism leaves itself wide open to accusations of tautology.

    Let us turn again to the immune system for guidance. In the network model of the immune system, how could one possibly tell how "fit" an antibody class was, without inserting it in the system to see what happens? One could never tell for certain. However, there would be certain indicators. For one thing, an antibody class able to integrate itself with the interconnected network of antibody classes would be much more likely to survive than a randomly selected antibody class. It is true that an antibody class might die out while interacting with the network -- but an antibody class which had no connection with either antigen or other antibody classes would be essentially certain to die out, and fast. So if the structure of the antibodies in a class matches the structure of antibodies already in the system, that class will be far more viable.
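
    One can turn this observation into a crude, computable indicator of viability. In the sketch below -- a made-up illustration, not a model anyone uses -- antibody "shapes" are bit strings, and a candidate class is scored by how well it complements the classes already in the network. A shape complementary to an established class scores far higher than a shape with no particular relation to the network.

    import random

    random.seed(5)
    LENGTH = 32

    def random_type():
        return [random.randint(0, 1) for _ in range(LENGTH)]

    def complementarity(x, y):
        # the fraction of positions at which the two shapes differ
        return sum(xi != yi for xi, yi in zip(x, y)) / LENGTH

    network = [random_type() for _ in range(40)]   # the established network

    def viability(candidate):
        # a crude indicator: the candidate's best fit with any class
        # already in the network
        return max(complementarity(candidate, t) for t in network)

    fits_in = [1 - bit for bit in network[0]]   # complements one member exactly
    outsider = random_type()                    # no particular relation
    print(viability(fits_in), viability(outsider))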

    In the case of the immune system, then, fitness just means "fitting in with the environment," in the sense of generating a lot of emergent pattern in conjunction with the environment. I suggest that this is what fitness always means, in ecosystems, brains, or wherever. An organism is fit, roughly speaking, if it generates a lot of pattern in conjunction with its environment. For it is pattern that makes pranamaya go. This is, in the end, Gregory Bateson's Metapattern: that, in living systems, it is pattern which connects.

    This new, complexity-and-pattern-oriented concept of fitness has immense ramifications for all aspects of life. In terms of contemporary Western culture, it might be viewed as a feminine concept of fitness. Instead of stressing competition on some particular trait, it stresses finding a suitable, productive place in the environment -- not "fitting in" in the sense of conforming to what everyone else is doing, but "fitting in" in the sense of finding or making oneself an appropriate niche.

    ***

    Having looked briefly at immune systems, brains and ecosystems, we are now in a position to limn the common patterns. Each of these systems creates new forms by some kind of natural selection -- i.e., by trial and error and differential reproduction. Each of these systems maintains its forms by autopoiesis -- reverberating neural circuits and maps, immune networks and frozen components, ecological webs. And each of these systems displays complex dynamics that, while clearly patterned and not totally random, are neither stable nor oscillatory -- i.e., all are pushed by their twin dynamics of adaptation and autopoiesis into certain strange attractors.

    These concepts of complexity science, as I have repeatedly stressed, are not just technical ideas. They are an example of science arriving at its own language for talking about ancient, intuitive truths. Attractors, adaptation and autopoiesis are science's way of saying that the world is alive. They are an elaboration of Bateson's notion that "it is pattern that connects." They are the bridge that science has built between annamaya and pranamaya. And, as we shall see in the following chapter, they are surprisingly useful for probing the level of Mind, manomaya, as well.

     POSTSCRIPT: COMPLEXITY, QUANTA AND MIND

    Finally, before moving on to psychological systems, it is worth pausing to point out certain obvious parallels between complexity science and quantum physics. Both speak of holism. And both, by speaking of holism, promise to transform our view of the physical world.

    Quantum physics teaches that the world is a unified whole: distant particles are bound together by nonlocal correlation, and the observed is bound to the observer. Complexity science, on the other hand, teaches that the interesting properties of real-world systems are holistic ones. Quantum theory has further implications, regarding the limited validity of linear time and the interdefinition of basic physical entities. Complexity science also has further implications, regarding the existence of subtle connecting patterns that are difficult to boil down to the physical level, and that transcend the typical hierarchical pattern of science.

    Neither of these scientific advances entirely captures the nature of the higher levels of being. However, they are both quite reminiscent of these fundamental spiritual planes of reality. Science is coming closer and closer to a world-view harmonious with spiritual experience. In the next chapter, we will use this convergence of science and spirit to come to grips with the nature of mind.

    Mind is trapped between the two extremes of Atman and annamaya. In the most general sense, "mind" might be taken to refer to the entire hierarchy. But the core meaning of "mind" lies in the middle levels of the hierarchy: pranamaya, the feelings; manomaya, the thinking mind; and vijnanamaya, the higher intuitive inspiration. In this sense, we may say that spirituality touches mind from the top down -- spiritual seers take their insights from the higher levels and apply them to understand mental process. On the other hand, science touches mind from the bottom up -- scientists take insights from the physical world and build them up into models of mental processes. In science and spirituality, we have what might be called top-down and bottom-up views of the mind. Mind is illuminated from both directions: by a light from above, and a light from below.

    This is a unique time in history, in that now, for the first time ever, bottom-up knowledge and top-down knowledge of the mind are converging. As I will show in the following chapters, the careful and creative study of the mind in terms of complexity science leads to conclusions much like those obtained from the careful and creative exploration of the higher realms of being. And what is the message imparted to us by this correspondence? It is, I submit, a very simple one: namely, that all the levels of being are really one. The five sheaths of Atman are in reality just part of Atman.

    The realization that all the levels of reality are one is, of course, part of the Perennial Philosophy. To experience all the levels as one, vividly and directly, is to be enlightened. From this perspective, it is highly encouraging to see that the natural development of science is leading us towards a realization of the unity of the different levels. Science, which developed in a state of war against religion, is now prodding us on to deep spiritual realizations. And this historical fact, in itself, feels to me like a part of the anandamaya, the sheath of Bliss. It is a supremely ironic correspondence which is almost too subtle to put into words or ideas.