The Evolving Mind -- Copyright Gordon and Breach © 1993

CHAPTER 3

SELF-ORGANIZING EVOLUTION

Artificial selection is what breeders do. They know that different plants and animals of the same species have widely varying characteristics, and they also know that a parent tends to be similar to its offspring. So they pick out the largest from each generation of tomatoes, or the fastest from each generation of horses, and ensure that these superior specimens reproduce as much as possible. The result is a consistently larger tomato, or a consistently faster horse.

    With artificial selection, however, the selection is merely a tool of the breeder, who sets the goal. If one wishes to use artificial selection as a model for natural evolution, one must answer the question: what is the goal and who sets it? The simple, brilliant answer of Darwin and Wallace was: the environment sets the goal, which is survival. Darwin summarized his theory of natural selection as follows:

        If under changing conditions of life organic beings present individual differences in almost every part of their structure, and this cannot be disputed; if there be, owing to their geometrical rate of increase, a severe struggle for life at some age, season or year, and this certainly cannot be disputed; then, considering the infinite complexity of the relations of all organic beings to each other and to their conditions of life, causing an infinite diversity in structure, constitution and habits, to be advantageous to them, it would be a most extraordinary fact if no variations had ever occurred useful to each being's own welfare, as so many variations have occurred useful to man. But if variations useful to any organic being ever do occur, assuredly individuals thus characterized will have the best chance of being preserved in the struggle for life; and from the strong principle of inheritance, they will tend to produce offspring similarly characterized. This principle of preservation, or the survival of the fittest, I have called Natural Selection.

    Despite the wealth of highly specific data with which he supported it, Darwin's concept of natural selection was a broad one. Subsequent evolutionists have interpreted this concept in many different ways. And the differences between these interpretations are of more than philosophical importance: in many cases they lead to drastically different explanations of particular biological structures and events.

    Herbert Spencer's phrase "survival of the fittest" captures one interpretation admirably well: the idea of the struggle for existence, according to which various organisms battle it out for limited resources, and the best of the bunch survive. This interpretation of natural selection, combined with Mendelian genetics, forms the theory which I call strict Darwinism _ an evolutionary research programme which essentially dominated biology throughout the middle of this century. As observed in the Introduction, most of the recent attempts to model the mind or brain in evolutionary terms implicitly accept strict Darwinism as fact.

    As also noted in the Introduction, I am not alone in rejecting strict Darwinism. My goal in this chapter will be to sketch an alternative interpretation of evolution _ one which fits within the general structure of Darwinian theory, but differs from strict Darwinism on many points of philosophical and practical importance.

    What exactly is the problem with strict Darwinism? The case against this theory is best made with specific examples, rather than abstract pontification. However, we must begin with some general statements, so as to give a framework for interpreting the examples to be discussed below. Toward this end, rather than merely giving you my own opinion, let me hand over the microphone to Stephen Jay Gould - an accomplished geologist, biologist and historian who has spent much of his career corralling examples showing exactly how misleading strict Darwinism is.

        This strict version [of Darwinism] went well beyond a simple assertion that natural selection is a predominant mechanism of evolution.... it emphasized a program for research that almost dissolved the organism into an amalgam of parts, each made as perfect as possible by the slow but relentless force of natural selection. This "adaptationist program" downplayed the ancient truth that organisms are integrated entities with pathways of development constrained by inheritance - not pieces of putty that selective forces of environment can push in any adaptive direction. The strict version, with its emphasis on copious, minute, random variation molded with excruciating but persistent slowness by natural selection, also implies that all events of large-scale evolution (macroevolution) were the gradual, accumulated product of innumerable steps, each a minute adaptation to changing conditions within a local population. This "extrapolationist" theory denied any independence to macroevolution and interpreted all large-scale evolutionary events (origin of basic designs, long-term trends, patterns of extinction and faunal turnover) as slowly accumulated microevolution (the study of small-scale changes within species). Finally, proponents of the strict version sought the source of all change in adaptive struggles among individual organisms, thus denying direct causal status to other levels in the rich hierarchy of nature with its "individuals" both below the rung of organisms (genes, for example) and above (species, for example). The strict version, in short, emphasized gradual, adaptive change produced by natural selection acting exclusively on organisms. (1983, p. 13)

    Essentially, strict Darwinism views an organism as a bundle of "traits." Random genetic variation causes traits to change, little by little, from one generation to the next. When a trait changes into a better trait, the organism possessing the improved trait will tend to be more successful in the struggle for existence, as compared to those organisms still possessing the old trait. Thus the better trait will tend to become more and more common. This point of view encourages the idea that every aspect of every organism is in some sense "optimally" constructed (Dupre, 1987).

    The view which Gould urges, on the other hand, admits that it is impossible to decompose an organism into a vector of traits: that the qualities which characterize an organism are deeply interconnected, so that small changes induced by random genetic variation will often display themselves throughout the whole organism. And it admits that "better" is not definable independently of evolutionary dynamics: the environment of an evolving organism consists partly of other evolving organisms, so that the organisms in an ecosystem may be "optimal" only in the weak sense of "fitting in relatively well with one another." These admissions form a large part of what I call the self-organizational theory of evolution _ they interpret natural selection in terms of self-organization on the intra-organismic and inter-organismic levels.

    Unlike, say, evolutionism and creationism, or classical physics and relativistic physics, the primary relation between strict Darwinism and the self-organizational theory is not one of direct opposition. The difference between the two views is largely one of emphasis. Strict Darwinism does not deny the existence of self-organization within organisms, nor does it deny the existence of complex ecological structures. And the self-organizational theory does not deny that, in some cases, evolution works by reinforcing traits that are just plain better than the alternatives in a way that does not depend sensitively on the details of local environments. However, the two views differ dramatically in their estimates of the relative frequencies of situations in which different phenomena are important. Strict Darwinism estimates that self-organization is only infrequently relevant to evolutionary analysis, and that trait-by-trait optimization is very often important. The self-organizational theory, on the other hand, gauges the relevance of trait-by-trait optimization to be small, and the frequency of significant self-organizing phenomena to be large. This sort of disagreement _ one which focuses on relative frequency rather than absolute principle _ is characteristic of biology, in which there are always a few exceptions to every rule (including this one!).

    I have overstated the case a bit: in the pages to follow we shall encounter at least one important disagreement on absolute principle (regarding the relation between mutation and selection). But the point is that disagreements of this nature are, in themselves, only a small part of the story. The biggest disagreements regard relative frequency; that is, they have to do with what is to be considered the exception, and what the rule.     

3.1 THE CASE AGAINST STRICT DARWINISM

The case against strict Darwinism is a complex one. And this is fitting _ after all, strict Darwinism is a complex theory of very high quality. In their New Biology, however, Augros and Stanciu construct a very simple argument against strict Darwinism. This argument has its disadvantages: in some ways it oversimplifies both strict Darwinism and the self-organizational alternative. But it is as good a starting point as any.

    Augros and Stanciu summarize Darwin's argument as follows:

        1) on the principle of geometrical increase more individuals of each species are produced than can possibly survive; 2) a struggle for existence ensues, one organism competing with another; and 3) owing to this struggle for existence, slight variations, if in any degree advantageous, will accumulate and produce a new species.

They find fault with each of these statements.

    Of course, Darwin's view was much more complex than this epitome indicates. However, each of these three points was indeed essential to Darwin's thought; Augros and Stanciu are merely simplifying things, not setting up a straw man.

    Against the principle of geometrical increase, Augros and Stanciu cite a study of over three thousand elephants in Kenya and Tanzania, showing that "the age of sexual maturity in elephants was very plastic and was deferred in unfavorable situations.... Individual animals were reaching maturity at from 8 to 30 years." They cite similar studies for "white-tailed deer, elk, bison, moose, bighorn sheep, Dall's sheep, ibex, wildebeest, Himalayan tahr, hippopotamus, lion, grizzly bear, dugong, harp seals, southern elephant seals, spotted porpoise, striped dolphin, blue whale, and sperm whale... rats, mice and voles... some species of birds" (Augros and Stanciu, 1988, p.125). Also, they note that, in many species, the number of offspring is a function of the amount of food available.

    Against the idea of the "struggle for existence," they point to the well-known organization of ecosystems into "niches." Evolutionary biologist Niles Eldredge has spoken of the many "ecologists skeptical of the very concept of competition between species ... who claim they simply cannot see any evidence for such raw battling going on nowadays in nature" (1985, p.82). Predation is a highly significant source of interspecies violence; and intraspecies combat is also not infrequent, as in disputes over territory. But competition between different species is not common at all. In general, it seems that different species live in slightly different places, and/or eat slightly different foods.

    One striking example of this is provided by MacArthur's classic study of the warbler, shown in Figure 8 (MacArthur, 1958). MacArthur demonstrated that five species of warbler, similar in size and shape, feeding on bud worms in the same spruce trees, avoid competition by consistently inhabiting different regions of the trees. One species tends to remain near the tops of the trees, another toward the outside middle, etc.

    The examples go on and on. A lion and a cheetah rarely fight _ the cheetah, with its superior speed, simply runs away. A large bird and a small bird, whether of the same or different species, rarely fight over the same piece of food. The small bird gives up quickly, instinctively recognizing a severe threat to its life.

    Competition does exist, but it is far from prevalent. Cooperation is at least as common. Fungus and algae combine to form lichen. Or, more esoterically, the Calvaria tree of Mauritius island "has not been able to germinate for over three hundred years... since the extinction of the dodo.... The dodo, in eating the Calvaria fruit, ground and abraded the hard shell of the pit in its powerful gizzard, such that when excreted, the seed was able to penetrate the shell and grow. Without the dodo's help the Calvaria seed could not break through its own shell." And locust trees, like other legumes, carry nitrogen-fixing bacteria on their roots which, even in poor soil, boost the average height of surrounding cedar trees from thirty inches to seven feet (Augros and Stanciu, 1988, p.112).

    Finally, Augros and Stanciu point out that there is absolutely no evidence in favor of the hypothesis that small variations in particular traits can accrete to cause the creation of new species. We have seen the emergence of new breeds within the same species, but we have never witnessed the birth of a new species, so there is no way to directly test any theory of species creation. And what is known of the genetics of mutation is of little help: it is sketchy and full of mysteries. For instance, there are huge sections of DNA for which no particular purpose has been identified.

    This point is related to Gould and Eldredge's (1977) theory of "punctuated equilibrium" _ the hypothesis, strongly supported by the existence of huge gaps in the fossil record, that species creation is not gradual but rather occurs in relatively sudden jumps (perhaps as little as ten thousand years). Although Darwin himself was an avowed gradualist, the phenomenon of punctuated equilibrium does not explicitly contradict Darwin's theory of natural selection. But it does pose the theory a rather difficult question: exactly how do small variations set off large changes? Augros and Stanciu do not pursue this matter, but we shall return to it a few sections down the road.

    So, Augros and Stanciu have given us a nice and orderly case against strict Darwinism. Exponential population growth is not a universal rule; the complex system of niches means that different species are not continually struggling with one another; and there is no evidence that the continual accretion of small changes in individual traits can culminate in the creation of intricate new forms. Note that all of these points are matters of relative frequency. Augros and Stanciu are saying that exponential population growth, competitive struggle and accretion of tiny changes into huge ones are uncommon occurrences. They are not saying that these things don't happen _ all but the latter are positively known to occur, on occasion. Conversely, the strict Darwinists never said that these things happened all the time _ only most of the time.

    Because these are basically arguments regarding relative frequency, one must be skeptical of long lists of examples (such as the ones given by Augros and Stanciu, and excerpted above). After all, even if someone can list fifty-seven thousand instances of symbiosis, what does it mean? Maybe there are a million instances of violent competition. Even if we listed all examples known to biological science, that would not get around the fact that our theoretical presuppositions tell us what sort of examples to look for. Concrete examples are important, but it is just as important to seek a fundamental understanding of the structures and dynamics involved.

    Toward this end, let us look back at the immune system. An immune system has a definite goal: to recognize antigen, grab onto it and destroy it. In this system we do have natural selection, in that an antibody which is of no use at attacking antigen is less likely to survive than one which is useful for this purpose. But this simple dynamic is modified and sometimes stifled by the existence of self-organization, by the existence of the immune network.

    Competition does not necessarily play a major role in immune evolution, because different antibody classes tend to have different "niches," different places in the network. Predation of a sort is of course very common, but this is different from competition. Cooperation is prevalent, in that each element of the network directly or indirectly supports other elements of the network. In this way an antibody class which is useless for recognizing common antigens may be preserved, simply because it is useful for recognizing other antibodies.

    This picture of the immune system is a nice counterpart to the work of Augros and Stanciu. It suggests that in the ecosystem, as in the immune system, natural selection must be understood in the context of self-organization. The "fittest" organism, or antibody, is not the one which beats out all its competitors in some sort of contest, but rather the one which best makes a place for itself in the complex network of relations which its peers, together with the physical environment, create. There is a sort of contest to adapt most effectively, but it is a contest in which the playing field, and the rules of the game, vary a great deal from competitor to competitor (and are indeed created by a complex dynamic involving all of the competitors).

3.2 FITNESS

Everyone has heard the argument: evolution by natural selection is a tautology, for it postulates the survival of the fittest, but how do we know who the fittest is except by seeing who survives? Herbert Spencer, one of the first to make this "observation," saw it as proof of the universal philosophical truth of evolution by natural selection.

    But, of course, Darwin never considered natural selection to be tautologous. "Survival of the fittest" was Spencer's phrase, and I am not sure what Darwin meant when he adopted it. My own inclination is to interpret "fitness" in the sense of "most closely fit to the environment" _ using the word "fit" as in the sentence "My coat fits well." Under this interpretation there is no tautology _ only a problem, the problem of defining what it means to be closely fit to the environment.

    The concept of fitness has always been rather slippery. In the strict Darwinist literature, one often reads of the "fitness value" of a specific trait, but no general definition of fitness is ever given. Very few indications are ever given as to how one might determine the fitness value of a specific trait in the context of life in an actual ecosystem. In this way strict Darwinism leaves itself wide open to accusations of tautology.

    Let us turn again to the immune system for guidance. In the network model of the immune system, how could one possibly tell how "fit" an antibody class was, without inserting it in the system to see what happens? One could never tell for certain. However, there would be certain indicators. For one thing, an antibody class able to integrate itself with the interconnected network of antibody classes would be much more likely to survive than a randomly selected antibody class. It is true that an antibody class might die out while interacting with the network _ but an antibody class which had no connection with either antigen or other antibody classes would be essentially certain to die out, and fast. So if the structure of the antibodies in a class matches the structure of antibodies already in the system, that class will be far more viable.

    Let E denote the entire immune system, and let A denote an antibody class. Then, another way of putting the argument of the above paragraph is: one fairly accurate predictor of the survival of A is Em(A,E-A). If the structure of A matches the structure of other elements of the system, Em(A,E-A) will be nontrivial _ because there is an important emergent pattern in the set consisting of a lock and a key. From the key one may compute the lock with a great deal of accuracy, and vice versa.

    In case this is not immediately apparent, let us consider a very crude model. Let us model an antibody as a sequence of bumps and indentations, each with a height between -5 and 5. Two antibodies A and B are then said to match if they can be lined up so that the bumps on A correspond to the indentations in B. For instance, the antibodies A="-1,2,3" and B="1,-2,-3" are perfect matches, because wherever A has a bump of height x, B has an indentation of depth x; and wherever A has an indentation of depth x, B has a bump of height x. Or, if

    A = " 1, 2, 3, 4, 2, 5, 3, 2, 1,-4,-4,-3,-2, 4,-3,-5, 1,-4"

    B = "-1,-2,-3,-4,-2,-4,-2, 0, 3, 3, 3, 3, 2,-4, 2, 4,-2, 4"

the bumps on A mainly correspond to indentations in B, and vice versa (for sake of biological realism, I have introduced a few "errors," inexact matches). Now, neither A nor B has any particular regularities in its structure. But Em(A,B) may, namely the (approximate) pattern (y,z) where z=A and y is the program which creates, from the bump sequence "x1,x2,...,xn," the bump sequence "-x1,-x2,...,-xn." Whether this is actually a pattern depends on the technicalities of the computation of y _ but if A and B were, say, 100 or 1000 bumps long, it would definitely be a pattern under any reasonable model of computation. This is a highly simplified model, but the same result would hold in any formalization of the "lock-and-key" mechanism of immunological recognition.
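    To make the lock-and-key computation concrete, here is a minimal sketch of the bump-and-indentation model in Python. The scoring function and its tolerance are my own illustrative choices, not part of any immunological formalism; the two sequences are the ones given above.

    # A toy version of the bump-and-indentation model of antibody matching.
    # An antibody is a sequence of integer heights between -5 and 5; two
    # antibodies match to the extent that their heights cancel position by
    # position. The tolerance for "inexact matches" is an arbitrary choice.

    def match_score(a, b, tolerance=1):
        """Fraction of positions where a bump on one antibody meets a
        corresponding indentation on the other, within the tolerance."""
        assert len(a) == len(b)
        hits = sum(1 for x, y in zip(a, b) if abs(x + y) <= tolerance)
        return hits / len(a)

    A = [1, 2, 3, 4, 2, 5, 3, 2, 1, -4, -4, -3, -2, 4, -3, -5, 1, -4]
    B = [-1, -2, -3, -4, -2, -4, -2, 0, 3, 3, 3, 3, 2, -4, 2, 4, -2, 4]

    print(match_score(A, B))   # about 0.89: a close but imperfect match
    print(match_score(A, A))   # 0.0: a shape does not complement itself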

    This rough-hewn immunological model hints at a more general rule: one key indicator of the fitness of an entity A in an environment E is Em(A,E-A) (where the minus sign is to be interpreted in the set-theoretic sense, so that E-A is all those parts of E which are not also parts of A). Intuitively: if A and E "fit" together, one should be able to determine something of the nature of E from the nature of A, and/or vice versa.

    Note that I have called Em(A,E-A) an "indicator" of fitness, rather than fitness itself. To call something an indicator of fitness is to imply that there is some notion of fitness which is independent of that indicator. So, by using the word "indicator" I am leaving the door open for some future characterization of fitness, one which is in some sense more fundamental than Em(A,E-A). In fact, I know of no other characterization that fits the bill _ but in any event, let us define the structural fitness of an entity A in an environment E to be Em(A,E-A), and leave open the possibility that, in the future, some other rigorous definition of fitness will be proposed.

    Now, let us write E(t) for the state of a system E at time t, and A(t) for the state of the component A at time t. E is to be thought of as an ecosystem, and A as a species; but I am speaking more generally, with psychology and neuroscience in mind. Let P(A(t)) denote the population of A at time t. Let N(E) denote the average, over all A which belong to E at any time, of the correlation of the function Em(A(t),E(t)-A(t)) with the function P(A(t)) [this may be measured, for example, by the standard "correlation coefficient" from statistics]. The idea is that N(E) is large if, on the whole, those A with a high structural fitness tend to survive better. N(E) is the extent to which the system E evolves by natural selection.
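    As a sketch of how N(E) might be estimated, suppose that for each component A we have observed a time series of structural fitnesses Em(A(t),E(t)-A(t)) and a time series of populations P(A(t)). Then N(E) is just the average correlation coefficient across components. The numbers below are invented placeholders _ computing Em itself is, of course, the hard part.

    # A sketch of the estimation of N(E): the average, over components A
    # of a system E, of the correlation between structural fitness and
    # population. The series below are invented placeholders; in practice
    # each fitness value would be Em(A(t), E(t)-A(t)).

    from statistics import correlation  # available in Python 3.10+

    def N(fitness_series, population_series):
        """Average, over components, of the correlation between a
        component's structural fitness and its population."""
        rs = [correlation(f, p)
              for f, p in zip(fitness_series, population_series)]
        return sum(rs) / len(rs)

    # Three hypothetical components of E, observed at five times each.
    fitness = [[0.1, 0.3, 0.5, 0.6, 0.7],
               [0.9, 0.8, 0.6, 0.5, 0.2],
               [0.4, 0.4, 0.5, 0.6, 0.5]]
    population = [[10, 25, 40, 55, 60],
                  [80, 70, 50, 45, 20],
                  [30, 28, 35, 42, 33]]

    print(N(fitness, population))  # near 1: fit components tend to thrive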

    The structural version of the theory of natural selection, then, states that in real ecosystems N(E) is large. I have given an intuitive argument, bolstered by the example of immunological evolution, that this is in fact the case. Empirical demonstration is, of course, another matter; and it may well be that only a minority of what is conventionally considered evidence for natural selection is actually evidence for the present formulation of the theory of natural selection.

    This brings us back to the point made at the end of Chapter 1. If Em(A,E-A) is large for every or most species A in an ecosystem E, then S(E) is large, because S(E) contains Em(A,E-A) for every A. Therefore, the tendency of an ecosystem to evolve by natural selection implies the tendency of that ecosystem to have a high structural complexity, yielding the second half of the principle stated in rough form at the end of Chapter 1: the success of a system which evolves by natural selection is correlated with the smallness of its KCS complexity and the largeness of its structural complexity.

3.3 HIERARCHICAL EVOLUTION

We have now given a precise structural characterization of what it means for a system _ e.g. an ecosystem _ to evolve by natural selection. But the structural definition of "natural selection" looks rather different from Darwin's definition of natural selection, quoted at the beginning of the chapter.

    Darwin talked about organisms that interact and reproduce; we have only spoken about systems, components, and emergent pattern. In fact, so far as I know all previous evolutionary theories have used either the Darwinian language of interacting and reproducing organisms, or a derivative language of interacting and reproducing "units" (say, genes or populations or species).

    Strict Darwinism, following Darwin, identifies organisms as the unique entities which vary randomly and reproduce differentially. Dawkins, in The Selfish Gene, suggests that genes rather than organisms are the "units of selection" _ that the evolution of organisms is just a consequence of the random variation and differential reproduction of genes, so that a person is just a gene's way of making another gene. And, finally, a host of biologists have suggested that collections of organisms such as kin groups, species and populations can be understood as "units of selection," as entities which vary randomly and reproduce according to fitness.     

    But the definition of N(E) given above is "units-free" _ it does not tell us which entities within the system E are the "units of selection"; it doesn't care.

    In this section I will show how the system-theoretic concept of natural selection given in the previous section ties in with these "units"-based versions of natural selection. We shall see that in many ways it is an advantage to have a theory that does not need to know in advance what the "units of selection" are.

    N(E) is a measure of correlation over all "subsystems" A of E, and this collection of subsystems may be chosen to include genes, organisms, populations and species. Em(A,E-A) may be large for a certain species, and for a certain organism within that species, and for a certain gene within that organism _ but this says nothing about which one is the "actual unit of selection."

    Let us begin with a cursory review of the evidence that the organism is not the only unit of selection.

    First of all, it has been known for twenty years that natural selection may occur beneath the organismic level. For instance, Lewontin (1970, p. 5) has developed the concepts of gametic selection and organelle selection, and pointed out evolutionary phenomena which are clearly inexplicable in terms of selection on any higher level.

    Even more microscopically, it is possible to explain evolution as a "molecular struggle for existence within the DNA of the chromosomes, using the process of natural selection" (Orgel and Crick, 1980). According to this philosophy, it is the genes which act so as to optimize their chance of survival, by systematically mutating so as to produce successful organisms. This is almost a triviality, since in the standard picture of evolution it is the genes, more directly than the organisms, which are mutating and reproducing. As the aphorism goes, in this view an organism is a gene's way of making another gene.

    On the other side of the organismic level, there is the intriguing possibility, raised with particular interest in the context of the theory of punctuated equilibrium (Eldredge and Gould, 1972), that populations, species or higher taxa may evolve by natural selection in the same sense that organisms do (Matessi and Jayakar, 1976; Wade and McCauley, 1980; Boorman, 1909; Uyenoyama, 1979).

    To see how this would be possible, consider the evolution of bad taste in prey, such as butterflies (Michod, 1984). It is to the advantage of a butterfly species that is often eaten if it tastes bad _ that way, predators will eat a few members of the species, realize how bad they taste, and go on to eat some other species of butterfly instead.

    But if a whole species of butterflies tastes good, there is no advantage for only one butterfly within that species to taste bad. After all, even if a predator eats it and subsequently gives up on the whole species - the specific butterfly is already dead. Bad taste benefits the species, but if one assumes that genes or organisms are the units of selection, it is hard to see how bad taste would ever evolve. If a mutation promoting bad taste occurred in one butterfly, what would cause that butterfly to create any more offspring than any other butterfly?

    One solution to this riddle is as follows. Populations are finite, not infinite, and are therefore susceptible to fluke effects of non-negligible probability _ a phenomenon called "genetic drift." Perhaps bad-tasting butterflies are not all that uncommon, so that every decent-sized population contains a few. Then, over a long period of time, there will probably arise a population containing more than a few _ a large enough proportion to encourage a handful of predators to eat some other type of butterfly instead. And this population of butterflies, with its unusually large proportion of bad-tasting butterflies, will flourish more than other populations without enough bad-tasting butterflies to dissuade predators. This leads up to a weak but simple example of higher-level selection: among butterfly populations with more than a minimal proportion of bad-tasting butterflies, the more bad-tasting butterflies a population has, the more likely it will be to survive and grow.
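    This riddle is easy to turn into a toy simulation. In the sketch below _ with every number an invented assumption _ each population's proportion of bad-tasting members drifts randomly, and populations above a deterrence threshold grow while the rest shrink. The population-weighted proportion of bad tasters tends to rise, even though no individual butterfly ever benefits from tasting bad.

    # A toy simulation of population-level selection for bad taste.
    # All parameters (threshold, growth rates, drift size) are invented.
    # Populations above the deterrence threshold grow; the rest shrink.

    import random

    random.seed(1)
    THRESHOLD = 0.2   # proportion of bad tasters needed to deter predators

    # Each population: [size, proportion of bad-tasting butterflies].
    populations = [[1000, random.uniform(0.0, 0.3)] for _ in range(20)]

    initial = sum(p for _, p in populations) / len(populations)
    for generation in range(100):
        for pop in populations:
            size, p = pop
            p = min(max(p + random.gauss(0, 0.02), 0.0), 1.0)  # drift
            growth = 1.05 if p > THRESHOLD else 0.95           # predation
            pop[0], pop[1] = size * growth, p

    total = sum(size for size, _ in populations)
    weighted = sum(size * p for size, p in populations) / total
    print("mean proportion before:", round(initial, 2),
          "after, weighted by population size:", round(weighted, 2))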

    A more interesting but much less clear-cut example of apparent higher-level selection is the development of sexual reproduction. It is difficult to see how sexual reproduction could arise as a result of natural selection on the organism level: not only is no one organism made more viable because it reproduces sexually, but no one organism can, in itself, reproduce sexually. However, a group composed of sexually reproducing organisms may have certain long-term advantages.

    For instance, it is clear from the mathematics of mutation and reproduction that species which reproduce sexually will tend to produce a greater variety of offspring than species which reproduce asexually. The child of an amoeba is more similar to its parent than the child of a person or lizard or a bug is to its parents. This is because the genetic material of a product of sexual reproduction is not an exact copy of the genetic material of either of its parents. From this it follows _ reasoning very loosely _ that species which reproduce sexually may tend to survive better in environments which change relatively rapidly. Therefore, it might seem that a certain amount of natural selection on the species level is at work here, selecting those species which reproduce sexually. Species mutate into different species, and those species which reproduce sexually have a greater probability of survival in certain environments.
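    Reasoning just as loosely, the variety claim itself is easy to illustrate: if an asexual child is a mutated copy of its parent, while a sexual child is a mutated recombination of two parents, then the sexual child lies much further from either parent. The genome length and mutation size below are arbitrary choices for the sketch.

    # A loose illustration of why sexually produced offspring differ more
    # from their parents. Genomes are vectors of numbers; the asexual
    # child is a mutated copy, the sexual child a mutated recombination.

    import random

    random.seed(0)
    GENES, MUT = 100, 0.05

    def mutate(g):
        return [x + random.gauss(0, MUT) for x in g]

    def recombine(g1, g2):
        return [random.choice(pair) for pair in zip(g1, g2)]

    def distance(g1, g2):
        return sum((x - y) ** 2 for x, y in zip(g1, g2)) ** 0.5

    parents = [[random.gauss(0, 1) for _ in range(GENES)]
               for _ in range(50)]

    asexual = [distance(p, mutate(p)) for p in parents]
    sexual = []
    for _ in range(50):
        p1, p2 = random.sample(parents, 2)
        sexual.append(distance(p1, mutate(recombine(p1, p2))))

    print("asexual parent-child distance:", round(sum(asexual) / 50, 2))
    print("sexual parent-child distance: ", round(sum(sexual) / 50, 2))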

    This argument, though tantalizing, is a little dubious. Another way to look at the evolution of sex is via the "selfish gene" concept. It is to the benefit of a gene pool to develop in its corresponding organisms the trait of sexual reproduction. In this example gene-level selection and species-level selection are intuitively similar _ they are both more plausible than old-fashioned organism-level selection. In sum, the evolution of sexual reproduction is highly confusing; it is not clear on the face of it what is going on. Is the relevant unit of selection the species or some higher taxon, is it the individual organism, or is it the gene?

    One approach to the resolution of this sort of confusion is the emergence criterion of Vrba, Eldredge and Gould (Vrba, 1984; Vrba and Eldredge, 1984; Vrba and Gould, 1986). According to their approach, for each level, the set of all properties of entities on that level is partitioned into two disjoint sets: aggregate properties which are deducible from the inherent properties of lower-level components, and emergent properties which arise from the organization of lower-level components. Selection on any level, then, is defined as "interactions between heritable, emergent character variation and the environment that causes differences in rates of birth or death among varying individuals" (Vrba and Gould, 1986, p. 219).

    However, Lloyd (1988, pp. 101-105) correctly dismisses this simple-minded "emergence" approach, contending that the requirement of emergent, adaptive characters for a unit of selection is too strong. Evolution by selection at a level does not imply the existence of adaptations at that level; hence, their argument justifying the need for emergent properties is flawed (1988, p.107).

In general, there seems to be a great deal of confusion in the literature regarding the concept of hierarchical selection. In (Goertzel, 1991b) I have outlined a few of the serious problems with the standard approaches, and suggested an alternative. However, as pointed out by Stan Salthe (personal communication), the alternative given there is not satisfactory for the present purposes, since it takes a strict Darwinist approach to fitness: it assumes that fitnesses are a priori given. Similarly, Sober's (1984) probabilistic approach [which is not discussed in (Goertzel, 1991b)] seems to hold some promise for clarifying the issues involved, but it also takes fitnesses as a priori given. Here, therefore, we shall simply ignore all prior approaches and present a structural point of view.     

    So, finally, consider a system E, such as an ecosystem, which evolves by natural selection; and suppose that certain entities within E may be organized into classes L1,...,Ln so that every entity in Li is part of a certain entity in Li+1. For instance, we might have L1=genes, L2=organisms, L3=populations, L4=species. What we want to know is: on what level i _ if any _ does the system E's evolution by natural selection occur?

    Each level has a certain characteristic "evolvability" ev(Li), which may be defined as the correlation of the structural fitness of entities on that level with the survival rate of entities on that level (as discussed above when natural selection was defined). The question is: does the evolvability which is indicated by a positive ev(Li) come from mutation, reproduction and selection of entities on level i? Each entity y on level i may be expressed as a function of entities x1,...,xn on lower levels, and possibly other entities not included in the hierarchy of classes Li. Calling these other entities, collectively, z, we may write y=f(x1,...,xn,z). For instance, a species is determined by its member organisms. And an organism is determined largely, though not completely, by its chromosomes. To ask how much y itself directly evolves by natural selection, then, is to ask whether the more prominent patterns in Em(y,E-y) _ those which mainly determine the correlation with the survival of y _ are actually patterns in y as a whole, or are more fundamentally patterns in the elements of y. Thus, the extent to which y, as a whole, evolves by natural selection - or the holistic structural fitness of y, relative to the hierarchy L1,...,Ln - may be defined by:

    HSF(y) = |Em(y,E-y) - Em(x1,E-x1) - ... - Em(xn,E-xn)|

where the minus sign indicates, of course, set difference; and the bars | | indicate the size as defined in Chapter 1, and used in the definition of structural complexity.
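    To fix ideas, suppose _ as a crude simplification of the Chapter 1 definitions _ that each Em(.,.) is just a finite set of patterns, and that | | is simple cardinality. Then HSF can be rendered schematically as follows; the pattern labels are, of course, invented.

    # A schematic rendering of holistic structural fitness, treating each
    # Em(.,.) as a finite set of patterns and |S| as a raw count (in
    # Chapter 1 the size is weighted, so this is only a simplification).

    def hsf(em_y, em_parts):
        """Count the patterns emergent between y and its environment that
        are not already emergent between some part x_i of y and its
        environment."""
        residue = set(em_y)
        for em_x in em_parts:
            residue -= set(em_x)      # set difference, as in the formula
        return len(residue)

    # Hypothetical pattern labels for a species y and two member organisms.
    em_y = {"p1", "p2", "p3", "p4"}   # Em(y, E-y)
    em_x1 = {"p1"}                    # Em(x1, E-x1)
    em_x2 = {"p2", "p5"}              # Em(x2, E-x2)

    print(hsf(em_y, [em_x1, em_x2]))  # 2: patterns p3 and p4 belong to y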

    And what does it mean to ask, say, whether the evolution of sex is a species-level or organism-level phenomenon? One is simply asking: what is the degree of membership of the pattern of sexual reproduction in HSF(y), for different y? This is all very schematic: I must admit that I do not know exactly how to apply these generalities to the problem of the evolution of sex. But it is a very interesting problem.

    In conclusion, we have provided a point of contact between the system-level definition of natural selection given above, and the Darwinian concept of natural selection as something involving units of selection. The system-theoretic, structural perspective has the advantage of not being tied to any particular set of units _ though, consequently, it does not resolve the question of how frequently species-, organism- or gene-level selection occurs; that is another matter. However, all of these forms of natural selection are sensible and possible within the structural perspective.

3.4 IS MUTATION RANDOM?

As I have already pointed out, the arguments of Augros and Stanciu all regard relative frequency: they claim that self-organization plays a large role in evolution, whereas competitive struggle and trait-by-trait optimization play a small role.

    A. Lima de Faria, on the other hand, has made a much more radical argument against strict Darwinism, the substance of which is evident from the title of his book: Evolution without Selection. Like Augros and Stanciu, Lima de Faria has assembled long lists of examples in favor of his argument; here we shall review only the most crucial points.

    First of all, Lima de Faria proposes that evolution is a purely deterministic, "ordered" process, which will someday be completely analyzed in terms of the laws of physics. That is, he doubts that the "random" mutations of strict Darwinism are actually random. This is a provocative stance _ but while Lima de Faria overstates his case somewhat, the basic point is a good one.

    Let us begin by disposing of Lima de Faria's more sophistical arguments. Quantum physics _ which is known to play a role in molecular biology and suspected to play a role in the behavior of the brain (Penrose, 1989) _ is a probabilistic theory. Quantum reality is not deterministic. Lima de Faria, however, refers to "several physicists, especially Einstein, who have emphasized the limitations of the statistical approach" (1988, p. 300). While it is true that Einstein, Bohm and other distinguished physicists have conjectured that quantum physics will someday be supplanted by a purely deterministic theory, this is a very weak foundation on which to base a biological argument.

    Secondly, while Lima de Faria points out that "several physical phenomena, that were previously supposed to be random, turn out, when better analyzed, to be ordered" (1988, p. 300), he does not fully explore the implications of this revolutionary discovery. It is true that recent advances in dynamical systems theory have permitted us to analyze the equations underlying such random-looking phenomena as the beating of a heart, the motion of a damped forced oscillator, or the swinging of a pendulum which is allowed to move freely in all directions. But the primary result here has been that, although there are simple deterministic equations underlying these phenomena, the resulting dynamics are for all practical purposes totally random. This means that: 1) they are difficult to predict, in the sense that two systems which are very similar at one time may be extremely different later; and 2) a record of their evolution over time is statistically indistinguishable from the record of a random process.

    This may be phrased in another way. Suppose one sought to approximate the equations which govern dynamical systems such as hearts or pendulums by Turing machine computations _ e.g. by cellular automata models. Then one can show that these discrete models would have close to maximal KCS complexity. And it seems very, very likely _ though a formal demonstration may be difficult _ that they would have close to maximal depth complexity.
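    The point is easy to exhibit. The logistic map below is a one-line deterministic equation, yet the symbolic record of its trajectory is nearly incompressible, while a genuinely periodic record collapses to almost nothing. Using zlib as a stand-in for KCS complexity is, to be sure, only a rough heuristic.

    # A deterministic equation whose output record looks random: the
    # logistic map x -> 4x(1-x). We compare the compressibility of its
    # symbolic trajectory with that of a periodic record, using zlib as
    # a crude proxy for KCS complexity.

    import zlib

    def symbol_record(x0, n):
        """Record '1' when the state exceeds 1/2, '0' otherwise."""
        x, bits = x0, []
        for _ in range(n):
            x = 4.0 * x * (1.0 - x)
            bits.append("1" if x > 0.5 else "0")
        return "".join(bits).encode()

    chaotic = symbol_record(0.123456, 8000)
    periodic = ("01" * 4000).encode()

    print(len(zlib.compress(chaotic)))   # over a thousand bytes: the
                                         # record resists compression
    print(len(zlib.compress(periodic)))  # a few dozen bytes: the
                                         # regularity is found at once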

    As discussed in the Introduction, dynamical systems with properties such as this are said to express deterministic chaos (Gleick, 1989; Devaney, 1990). I cannot see why it matters whether mutations are produced by true randomness or deterministic chaos. After all, algorithmic information theory shows that it is impossible to tell one from the other.

    But all this leads up to a more interesting possibility. It is quite possible that mutations are produced by complex molecular systems which 1) have a "truly" random element due to quantum effects, and 2) have a deterministic component which is usually or largely chaotic, but sometimes expresses some form of order. It is not implausible that some of what we call random mutation is actually ordered in a subtle way, and that this order manifests itself in the structure of evolution.

    What sort of order am I talking about? Here is one possibility: what if the mutations that occur in one generation are biased by those mutations that occurred in the previous generation? For instance, what if mutations are more likely in genes that are related to genes that have been mutated in recent history? Would this not increase the probability of substantial coordinated structural changes? On the crudest level, this would mean that, in human evolution, ten consecutive changes to the brain are more likely than ten consecutive changes to ten different organs. There is no biological reason to believe that this is the case, but on the other hand we don't know enough genetics to show that it isn't.

    Lima de Faria has raised an important point. Although it doesn't matter whether mutation is truly random or deterministically chaotic, it is possible that mutation has some subtle statistical properties that molecular biology has not yet been able to discern. If this is the case, then the concept of "random variation" is not entirely correct _ where strict Darwinism and even self-organizationists like Gould see randomness, there is actually meaningful structure.
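    Here is a cartoon of the history-biased possibility raised above, with every number invented: mutation probability is elevated for genes adjacent (on a simple linear genome) to genes mutated in the previous generation, and we count how often new mutations land near earlier ones, compared with an unbiased control.

    # A cartoon of history-biased mutation: genes near last generation's
    # mutations are more likely to mutate now. Genome layout, rates and
    # the neighborhood relation are all invented for illustration.

    import random

    random.seed(2)
    GENES, BASE, BOOST = 100, 0.01, 0.20

    def neighborhood(hits):
        return {j for i in hits for j in (i - 1, i, i + 1)
                if 0 <= j < GENES}

    def step(last_hits, boost):
        hot = neighborhood(last_hits)
        return {i for i in range(GENES)
                if random.random() < (boost if i in hot else BASE)}

    for boost in (BASE, BOOST):       # unbiased control, then biased
        hits, followups = step(set(), boost), 0
        for _ in range(200):
            new = step(hits, boost)
            followups += len(new & neighborhood(hits))
            hits = new
        print("boost", boost, "-> follow-up mutations:", followups)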

    I am not promoting Lamarckism _ the inheritance of acquired characteristics _ nor any other variety of the claim that variations tend to occur in useful directions. I am merely pointing out that, although the process of variation does not know what is useful, it may still display certain patterns that significantly affect the course of evolution. I am not sure that I fully understand what Lima de Faria had in mind in his book _ as made plain above, I cannot accept all of his arguments. However, I agree with Lima de Faria's intuition that the structure of supposedly "random" mutation is an important area for investigation.

    This point is not essential to the main thrust of this book _ in fact, it is perhaps the most radical of the "side hypotheses" mentioned in the Introduction. We are no longer debating matters of relative frequency. We are assaulting one of the cherished principles of strict Darwinism, a principle so hallowed that it often goes unspoken: the independence of mutation and selection. We are proposing that self-organizing processes within an organism and its genome can affect the nature of mutation.

    Let's get a little more concrete. We have been talking about hidden order underlying apparently random variation. One particularly striking example of this is the phenomenon of directed evolution, recently observed in the laboratory by John Cairns, Barry Hall and others. This phenomenon is the most serious challenge that the doctrine of the independence of mutation and selection has ever received.

    Cairns worked with bacteria containing a gene which had to mutate in order for the organism to ferment the sugar in which it was growing. Bacteria which failed to mutate appropriately did not immediately die: they had no energy source but still survived for quite a while. What he discovered was that the number of generations required for the evolution of appropriate mutations is tremendously less than that which the straightforward mutation-selection model predicts.

    Hall took the idea one step further, using a bacterium which must mutate twice in order to grow in the sugar salicin. One of the two required mutations is, under ordinary circumstances, so rare as to be virtually undetectable: it occurs less than twice per every trillion cells. Yet, given the need, Hall's strains of bacteria developed the required mutations in over one in one hundred cells. The message is clear: if you challenge cells, mutations are there; if you don't challenge them, they aren't. Biologists have not yet isolated the cellular mechanism underlying this phenomenon, but the statistical evidence is plain. Somehow, mutation is not truly random; it directs itself toward the solution of the problem at hand.

    One possible way to achieve this effect would be the systematic variance of mutation rate. Perhaps a species, confronted with a life-threatening problem, mutates wildly when the answer is distant, and more restrainedly when the answer is judged to be near. This fits in with the suggestion made at the end of the previous section: that although variation does not know what mutations are going to be useful, it may still display significant and relevant patterns.
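    As a deliberately crude sketch _ with an invented target genotype standing in for "the answer," and an arbitrary rate schedule _ one can compare a constant mutation rate against a stress-modulated one in a simple selection loop:

    # A crude sketch of stress-modulated mutation. A bit-string population
    # must reach a target genotype; the per-bit mutation rate either stays
    # constant or rises with distance from the target. Nothing here
    # models real bacteria; it only makes the verbal idea mechanical.

    import random

    random.seed(3)
    BITS, POP, GENS = 40, 60, 500
    target = [1] * BITS

    def distance(g):
        return sum(a != b for a, b in zip(g, target))

    def mutate(g, rate):
        return [1 - b if random.random() < rate else b for b in g]

    def evolve(rate_for_distance):
        pop = [[0] * BITS for _ in range(POP)]
        for gen in range(GENS):
            pop.sort(key=distance)
            if distance(pop[0]) == 0:
                return gen                      # generations to "answer"
            survivors = pop[:POP // 2]          # truncation selection
            pop = [mutate(g, rate_for_distance(distance(g)))
                   for g in survivors for _ in (0, 1)]
        return GENS

    print("constant rate:", evolve(lambda d: 0.02))
    print("stress-modulated rate:", evolve(lambda d: 0.002 + 0.1 * d / BITS))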

    This is a radical biological hypothesis, and one which is not strictly needed for the main goal of this book _ the evolutionary analysis of mind. But it is intriguing, and not implausible. Is it really so unlikely that, when an organism is in mortal danger, it sends a message to the DNA of its incipient offspring to mutate more? If this is all that is required in order to make sense of the results of Cairns and Hall, then it is not too extravagant a hypothesis, when one considers the remarkable nature of the experimental results being explained.

    In a very general way, this concept fits in with observed patterns of macroevolution. For instance, the Cambrian extinction was by far the most severe extinction in the history of life on earth. And it was followed by the Cambrian explosion _ a tremendous diversity of new organisms, possessing a disparity in form never seen before or since. No one understands why the Cambrian explosion led to so many totally new body plans (Gould, 1989). Could part of the reason have been that the uniquely widespread danger of the Cambrian extinction promoted generally large mutation rates?

3.5 THE GENERATION OF FORM

So far we have considered only one of Lima de Faria's claims: that the supposedly random mutations required for natural selection are not actually random. More central and more convincing is his observation that "the gene does not create form or function," and his argument that, in general, natural selection is incapable of generating fundamentally new structures.

    First of all, Lima de Faria cites well-known studies of bone and shell formation which "elucidate the role of minerals and that of genes in the formation of biological pattern," to the effect that "the gene products only function as secondary components leading to the fixation of a given alternative." The shell of a mollusc, and the skeleton of a sea urchin, are constructed by the self-organizing dynamics of calcium carbonate crystals: the genes only provide these dynamics with an initial configuration.

    Of course, what is true of bones and shells may not be true of soft tissues. But such tissues also contain mineral components _ calcium atoms are common messengers in the cells of soft tissues. The self-organizing dynamics of soft tissues would be expected to be more complex _ after all, soft tissues would appear to have higher KCS and structural complexity _ but this is no reason to assume that such dynamics are not present.

    In fact, what is known of the bones is also strongly suspected to hold true at the opposite end of the complexity spectrum _ the most structurally complex of all the organs, the brain. Changeux (1985) and Edelman (1989) have independently observed that the DNA cannot possibly contain enough information to specify all the connections between the cells of the brain. They have both hypothesized that these connections are formed by a self-organizing process in the developing brain, a process by which the connections attempt to iteratively settle into a maximally consistent and useful pattern. Edelman's theory of adhesion molecules provides a detailed mechanism by which such a process could take place.

    So Lima de Faria's claim here is very plausible, almost irrefutable: what the genetic information does is to provide guidance for self-organizing physical/chemical/biological processes.

    He goes on to suggest that there are certain universal self-organizing processes, which exist on all levels of self-organization: molecular, cellular, and organismic. To bolster this claim, he gives a gallery of pictures showing similar structures on different levels [Figures 10-14]. These pictures take on a new meaning in the context of Section 2, where we saw how very short and simple algorithms can generate complex, biological-looking forms. For instance, Figure 10 shows the structural similarity of ice crystals on glass, the feather of a young bird, the RNA molecules formed along a DNA axis, and a fir tree. In Section 2 we saw how a fern can be generated by a few simple linear transformations. Similarly, each of these illustrations could be generated by a compact algorithm. And it seems likely that these algorithms would be closely related _ so that each of the four phenomena could be understood as a different modification of the same basic program.

    Just as all snowflakes follow the same six-sided roughly fractal form, and yet look subtly different _ so, proposes Lima de Faria, do all organic forms emanate from the same few structural plans. One piece of evidence in favor of this claim is the observation that a small change in one of the "parameters" of an organism can cause a huge change in its actual structure. For instance, it is well known that, in order to get a panda from an ordinary bear, one need only make a small number of genetic changes. And, citing a very well known example, Lima de Faria observes that

    the "conquest of the land" by the vertebrates is achieved by a tenfold increase in thyroid hormone levels in the blood of a tadpole. This small molecule is responsible for the irreversible changes that oblige the animal to change from an aquatic to a terrestrial mode of life. The transformation involves the reabsorption of the tail, the change to a pulmonary respiration and other drastic modifications of the body interior.... If the thyroid gland is removed from a developing frog embryo, metamorphosis does not occur and the animal continues to grow, preserving the aquatic structures and functions of the tadpole. If the thyroid hormone is injected into such a giant tadpole it gets transformed into a frog with terrestrial characteristics....

         There are species of amphibians which represent a fixation of the transition stage between the aquatic and the terrestrial form. In them, the adult stage, characterized by reproduction, occurs when they still have a flat tail, respire by gills and live in water. One example is... the mud-puppy.... Another is... the Mexican axolotl.

        The demonstration that these species represent transitional physiological stages was obtained by administering the thyroid hormone to axolotls. Following this chemical signal their metamorphosis proceeded and they acquired terrestrial characteristics (round tail and aerial respiration). (1988, p. 241)

The point is that the huge structural difference between a land animal and a water animal does not necessarily reflect a huge difference of underlying program: the two animals may result from very slightly different modifications of the same algorithm. Muller (1990) has recently made the same point, in a different context.

    Finally, a different sort of example of a "universal self-organizing principle" was given by Stephen Jay Gould in Hen's Teeth and Horse's Toes:

        In studying the relationship of brain size to body size, biologists find that brains increase only one-fifth to two-fifths as fast as bodies in comparisons of closely related mammals differing only (or primarily) in body size _ adults within a single species, breeds of domestic dogs, chimpanzees versus gorillas, for example. For ninety years, the large literature has centered on speculations about the adaptive reasons for this relationship, based upon the (usually unstated) assumption that it must arise as the direct product of natural selection.

        But my colleague Russell Lande recently called my attention to several experiments on mice selected over several generations for larger body size alone. As these mice increased in size across generations, their brains enlarged at the characteristic rate _ a bit more than one-fifth as fast as body size. Since we know that these experiments included no selection upon brain size, the one-fifth rate must be a side product of selected increase in body size alone.

The ratio of brain size to body size is a universal morphological principle _ it is caused by physical constraints and dynamics alone, not by evolutionary forces.

    Lima de Faria's major claim is that the development of organisms is controlled by universal organizing processes. As I see it, however, this does not imply that selection is unimportant to evolution. What it does imply is that it is pointless to ask of every detail in the structure of an organism: "How does this make the organism more fit?" It is a blow against strict Darwinism, but not against the concept that natural selection is an important factor in evolution.

    For instance, the Irish elk Megaceros had giant antlers, up to 3.5 meters across, weighing nearly a quarter ton. It is difficult to see how these would have evolved by natural selection _ what use could they have possibly been? The "new biology" of Augros and Stanciu is of little use here: it seems that these antlers could have been of little use in cooperatively adapting to the overall ecosystem. The same argument could be made regarding the bark of the domestic dog, or the cumbersomely huge teeth of the sabertooth tiger, or the antlers of the contemporary female reindeer, all of which are apparently totally useless (the male reindeer uses its antlers to fight other males, but inter-female combat plays no known role). As Lima de Faria says, "the explanation of this phenomenon must be sought in the molecular determination of the cell and the chromosome which follow pathways canalized by their internal construction and which are partly independent of the animal's behavior and of its environment" (1988, p. 279).

    Lima de Faria dismisses the classical analysis of the bright coloration of tropical birds, which tends to be most prominent among males. Darwin coined the term "sexual selection" to refer to the process whereby coloration varies slightly over time, and the male birds with the coloration which is most attractive to females are gradually selected. Sexual selection may well play a role, but, Lima de Faria argues, vivid colors are so strongly correlated with temperature that this role can be a minor one at best. Tropical birds, insects and fish all tend to be brightly and multiply colored, but birds, insects and fish from temperate or polar regions do not. And no one is suggesting that insects evolve by sexual selection.

    It seems to me that, based on the evidence which he presents, Lima de Faria's conclusion that "temperature and not sexual selection appears to be a major factor in the formation of the bright colors of animals" is not justified. I would say rather that the evidence suggests important roles for both. The physico-chemical dynamics makes the bright colors possible, at which point natural selection can enter the scene. As Stephen Jay Gould has tirelessly argued, natural selection adjusts the parameters which other forms of dynamics identify as free for adjustment. It could be that the prominence of bright colors in males rather than females is a consequence of some subtle chemical phenomenon, but sexual selection is also a viable hypothesis. However, even if natural selection did play a role, it was subordinate: physico-chemical dynamics originated the bright coloration, and sexual selection then fiddled around with it.

    In conclusion, let me summarize what I take to be the crux of Lima de Faria's wide-ranging and ingenious argument. As will be elaborated below, I find his argument to be a little overstated. But it is powerful nonetheless. What Lima de Faria is saying is that natural selection is not, in itself, very creative. Natural selection permutes already-existent phenomena into different arrangements, and it modifies already-existent phenomena into slightly different phenomena. But there is absolutely no support for the claim that it has a non-negligible probability of creating something fundamentally new _ without the aid of other special, complicated dynamics. Whenever, in the course of evolution, something fundamentally new comes into the picture (e.g. a new species), we have every right to suspect that there is an underlying, self-organizing physical/chemical/biological dynamic at work. If natural selection ever seems to be creating something new, it is merely adjusting one of the "parameters" in an existing, adaptable structure _ just as by increasing the level of thyroxine, one can change an axolotl from a water-breathing animal into an air-breathing animal.

3.6 STRUCTURAL INSTABILITY

Lima de Faria does not tell us how biological form is actually generated. Instead, he takes the easy way out, stating repeatedly that "physical processes" can be counted on to do the trick. But, in accordance with the philosophy outlined in Chapter 1, we would prefer an analysis on the level of self-organization and biological form. Invoking an all-powerful, nonspecific set of "physical principles" is not really much better than invoking an all-powerful force of natural selection.

    To put it another way, there is no doubt that specific physical/chemical/biological processes are involved, intricate and variegated in their details. But we may nonetheless ask: what kinds of processes are these?

    It pays to be a little more precise. Let y and y' be any two programs, and let e.g. y*z denote the outcome of running the program y on the input z. For instance, y and y' could denote genetic codes, and z could be the set of all stimuli to the developing organism. Then the essential question is: what is the probability distribution of the "innovation ratio"

    d(S(y*z),S(y'*z))/d(y,y') ?

That is: in general, when a program is changed by a certain amount, how much is the structure of the resulting organism changed? _ assuming a constant environment. If this ratio has a non-trivial probability of being large, then it is possible for small changes in program to lead to large changes in structure. If this ratio is virtually never large, then it is essentially impossible for natural selection to give rise to new form.

    I conjecture that natural selection can give rise to new form. This is not purely a mathematical conjecture. For, suppose the innovation ratio has a small but non-negligible chance of being large, and there are specific "clusters" of programs _ specific regions in program space _ for which it is acceptably likely to be large. Then one would have the biological question of whether real organisms reside in these clusters, and how they get there and stay there.

    It might be fruitful to consider the possibility of varying the environment as well, e.g. to look at the probability distribution of

    d(S(y*z),S(y*z'))/d(z,z').

This quantity might be called the "second innovation ratio". But I suspect that it is not nearly so essential as the innovation ratio proper. For instance, I conjecture that, where y is a genetic program and z and z' are environmental stimuli, d(S(y*z),S(y*z'))/d(z,z') is usually, though perhaps not always, quite small. True, a huge change in environment can make a huge difference _ it can kill an organism, or cause it to develop the capacity to breathe air. But that means only that when d(z,z') is large, d(S(y*z),S(y*z')) may also be large _ and this does not imply a large ratio.

    Let us define the structural instability of a program y to be the average, over all y', of d(S(y*z),S(y'*z))/d(y,y'). In a system which evolves at least partly by natural selection, the tendency toward the creation of new form may be rephrased as the tendency for organisms to foster structurally unstable programs.
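
    To make this definition concrete, here is a minimal computational sketch. Since the structural distance d(.,.) is, as discussed, extremely difficult to compute, the sketch substitutes normalized compression distance as a crude stand-in; the function develop is a hypothetical placeholder for the map (y,z) -> S(y*z); and the average over all y' is approximated by sampling random one-bit perturbations.

    import random
    import zlib

    def ncd(a: bytes, b: bytes) -> float:
        """Normalized compression distance: a crude, computable stand-in
        for the pattern-theoretic structural distance d(.,.)."""
        ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
        cab = len(zlib.compress(a + b))
        return (cab - min(ca, cb)) / max(ca, cb)

    def structural_instability(y, z, develop, n_samples=100):
        """Estimate the structural instability of the program y: the
        average of d(S(y*z), S(y'*z))/d(y, y') over perturbations y' of y.
        Here y is a list of bits, and develop(y, z) stands in for S(y*z),
        returning a bytes representation of the developed structure.
        Each sampled y' differs from y in exactly one bit, so the
        denominator d(y, y') is taken to be 1."""
        base = develop(y, z)
        total = 0.0
        for _ in range(n_samples):
            yp = list(y)
            yp[random.randrange(len(yp))] ^= 1    # one-bit perturbation
            total += ncd(base, develop(yp, z))    # divided by d(y,y') = 1
        return total / n_samples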

    Many of the examples in D'Arcy Thompson's classic On Growth and Form may be interpreted as instances of structural instability. Thompson shows how the shapes of radically different-looking animals can be obtained as simple nonlinear coordinate transformations of one another _ so that a diverse population of different forms is reduced to one key shape, plus a low-algorithmic-complexity list of coordinate transformations.

    More recently _ in his book Ontogeny and Phylogeny (1978) and in his essays (1980, 1984) _ Stephen Jay Gould has gathered together a large number of biological examples of apparent structural instability. For instance, on the island of Mauritius there are two genera of boid snakes which share a feature present in no other terrestrial vertebrate: the maxillary bone of the upper jaw is split into front and rear halves, connected by a movable joint. Did this feature evolve all at once, or step by step? If one embraces the step-by-step hypothesis, one must explain why snakes with a one-tenth-, or one-fourth-, or half-broken jawbone would have been successful enough to predominate over their competitors with unbroken jawbones. And how can a jawbone be half-broken anyway? It seems plain that slow evolution through more and more broken jawbones makes no sense. A small, fluky genetic mutation must have caused the severed jawbone (a large structural change), all at once.

    Gould has a particularly personal interest in the concept of structural instability, since he is best known for his theory of punctuated equilibrium _ which, as mentioned above, states that the evolution of new species occurs in relatively rapid jumps, rather than as the consequence of the accretion of a huge number of tiny changes. Since only relatively minor genetic changes can occur in a short period of time, the theory of punctuated equilibrium is only reasonable if small genetic changes can lead to large changes in form and behavior.

    More specifically, Gould suggests that

        [T]he problem of reconciling evident discontinuity in macroevolution with Darwinism is largely solved by the observation that small changes early in embryology accumulate through growth to yield profound differences among adults. (1984, p. 182)

Not only does he believe in structural instability, but he thinks he knows where it is located: in those parts of the DNA that control processes taking place early in embryology.

    Evidence in favor of this idea is provided by biological oddities like hen embryos that can be induced to grow teeth by the introduction of proteins from mouse embryos. A small change in early embryology _ one new type of protein _ can set into motion processes culminating in the construction of a large physical entity like a tooth (Gould, 1984, p. 182).     

    Even more striking are the well-known "homeotic mutants" _ insects born with, say, legs where their jaws should be. It is truly incredible that, just by fiddling with tiny portions of a fruit fly's DNA, one can completely disrupt its body plan. As Gould puts it,

        [E]mbryology is a hierarchical system with surprisingly few master switches at high levels... and small genetic changes that happen to affect the switches might engender cascading effects throughout the body. Homeotic mutants teach us that small genetic changes can affect the switches and produce remarkable changes in an adult fly. (1984, p. 194)

    Finally, perhaps the most striking example of all is the evolution of the human brain. The two largest differences between humans and the other primates are our upright stance and our huge brain. Gould argues that, while the upright stance was a difficult evolutionary achievement, our large brains evolved according to a very simple strategy called neoteny. In brief, we are permanent juveniles _ we retain into adulthood the characteristics of the juveniles of our ancestor species, and one of these characteristics is a large ratio of brain size to body size. The astounding difference between the human brain and the brain of a monkey was achieved by the simple progressive adjustment of a small number of "rate" genes determining the pace of development.

        Bipedalism is no easy accomplishment. It requires a fundamental reconstruction of our anatomy, particularly of the foot and pelvis. Moreover, it represents an anatomical reconstruction outside the general pattern of human evolution.... [H]umans are neotenic _ we have evolved by retaining juvenile features of our ancestors. Our large brains, small jaws and a host of other features, ranging from distribution of bodily hair to ventral pointing of the vaginal canal, are consequences of eternal youth. But upright posture is a different phenomenon. It cannot be achieved by the "easy" way ... for a baby's legs are relatively small and weak, while bipedal posture demands enlargement and strengthening of the legs.

This is a resounding affirmation of the lack of correlation between amount of genetic change and magnitude of structural effect. We shall return to it in Chapter 8.

    So, structural instability does occur in biological systems. It appears that biological systems often drift into that portion of program space which contains structurally unstable programs. If one accepts this observation, however, then one is left with the question of why. I agree with Gould that the answer lies in the hierarchical structure of embryological systems, mentioned above. But I am not sure that Gould has understood the complexity of the dynamics involved. Suppose one has a hierarchy of processes so that:

    1) Each process (except the lowest-level one) takes as input certain patterns in the output of the process one level lower than it.

    2) Each process is slightly structurally unstable.

Then it follows almost immediately that the hierarchy as a whole forms an extremely structurally unstable process. A small change in the input to the lowest level will cause a slightly bigger change in the structure of the output of the lowest level, which will cause a slightly bigger change in the structure of the output of the next level up, which will cause a slightly bigger change in the structure of the output of the next level up, et cetera. This is, at the very least, a very plausible explanation of the commonality of structural instability in biological systems. We shall return to the notion of hierarchy in embryology later, in Chapter 5.
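
    The cascade is easily quantified. Suppose, purely for illustration, that each of the n levels of the hierarchy amplifies structural distance by at least a constant factor r > 1, so that the distance d(j) at level j satisfies

    d(j) >= r d(j-1), and hence d(n) >= r^n d(0).

Even a mild per-level amplification compounds exponentially: r = 1.1 across twenty levels already yields a factor of (1.1)^20, roughly 6.7, magnifying a small genetic perturbation into a sizeable structural one.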

    Finally, having discussed structural instability so extensively, I must pause to point out the other side of the coin. Structural instability is when small changes in genetic code lead to large changes in structure. But there are also plenty of cases where large changes in genetic code lead to relatively small changes in structure. This sort of situation has caused no end of difficulty for taxonomists (those biologists who concern themselves with the classification of organisms).

    For example, in Tasmania there is a "dog" which is actually a marsupial. It looks and acts like a dog, but genetically it is radically different from a dog; it incubates its young in a pouch, like a kangaroo, and it evolved from marsupial ancestors rather than the immediate ancestors of dogs. The similarities between the Tasmanian "dog" and the dog are merely analogies _ they are structures that arose separately in two totally different contexts. Similar examples abound: for instance, the lungfish is genetically closer to mammals than to ordinary fish, but which one does it look and behave like? The existence of similar structures in genetically different organisms is a strong argument in favor of Lima de Faria's concept of universal self-organizing structural principles. Universal self-organizing principles and structural instability are not opposed to one another; they work alongside each other to create the complex world in which we live.

3.7 STRUCTURAL INSTABILITY OF CELLULAR AUTOMATA

We have recited a long list of biological examples. For the next few sections, we shall consider a variety of computer simulations illustrating the same basic principles. To begin with, we shall briefly review some attempts to study structural instability in the context of cellular automata.

    First of all, it is easy to see that any simple program involving the rules of Life is likely to be fairly structurally unstable _ because any change to the rules of Life immediately eliminates the magnificent structure-producing properties of Life.

    Also, the structural instability of cellular automata was indirectly observed by Langton, Li and Packard (1991) in their study of one-dimensional cellular automata, mentioned above. They did not measure structure in the sense defined here, but they did measure entropy and a number of other complexity measures indirectly related to structure. They showed that, in certain regions of "rule space," small changes in rules can lead to drastic changes in these complexity measures.

    And, finally, in (Goertzel, 1992c) I reported on a series of computer experiments explicitly designed to estimate the structural instability of 1-dimensional cellular automata. Due to various technical obstacles, these experiments were never extended to a full-scale scientific study. However, their heuristic value is significant nonetheless.

    These experiments studied the two-dimensional arrays generated by repeatedly iterating one-dimensional cellular automata, with an aim of telling how much these arrays change (structurally) when one changes the iteration rules that generate them. Formally, define a one-dimensional cellular automaton mod k with span m as a rule that maps each length-k binary sequence s1...sk into another length-k binary sequence t1...tk, in such a way that the value of ta depends only on those sb for which b differs from a by at most (m-1)/2, modulo k. Such a rule is determined by a function mapping the set of all length-m binary sequences into the set {0,1}. The experiments reported in (Goertzel, 1992c) considered the cases k=32 and k=128, and consisted of two phases: an exhaustive survey of the space of iteration rules of span 3 or less, and a directed search of the space of iteration rules of span 5 or less.

    I measured the distance between two two-dimensional binary arrays by an approximation to the structural distance, which instead of considering all patterns looked only at repetitive patterns. This was a sacrifice to practicality: the actual structural distance would have been impractical to compute, so I settled for an intuitively sensible underestimate. In an appendix to SI it is shown that repetitions make up a nonnegligible proportion of all patterns.
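
    For readers who wish to reconstruct the skeleton of such an experiment, the following sketch (the helper names are my own, and the published experiments differ in detail) generates the two-dimensional space-time array of a one-dimensional cellular automaton mod k with span m, in a form that can be fed directly to a compression-based approximation of structural distance such as the ncd function sketched in Section 3.6.

    import random

    def random_rule(m, rng=random):
        """A span-m rule: a lookup table assigning an output in {0,1} to
        each of the 2**m possible length-m neighborhoods."""
        return [rng.randint(0, 1) for _ in range(2 ** m)]

    def run_ca(rule, m, state, n_steps):
        """Iterate a one-dimensional CA on a circular lattice of
        k = len(state) cells; the span m is assumed odd, giving a
        neighborhood of radius r on each side.  Returns the space-time
        array as bytes."""
        k, r = len(state), (m - 1) // 2
        rows = [list(state)]
        for _ in range(n_steps):
            state = [rule[sum(state[(i + d) % k] << (d + r)
                              for d in range(-r, r + 1))]
                     for i in range(k)]
            rows.append(state)
        return bytes(cell for row in rows for cell in row)

Flipping a single entry of the rule table and comparing the two resulting space-time arrays then gives a one-sample estimate of the innovation ratio of that rule.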

    One phenomenon easily observable in both phases of the experiment was the following: for a rule F which 1) assigns the output 0 to a great majority of its input configurations, or 2) assigns the output 1 to a great majority of its input configurations, changing the rule slightly has very little effect upon the output. Thus, the innovation ratio is strongly negatively correlated with the dominance of either output value in the rule table.

    In a similar vein, I also observed that rules with high complexity tend to occur in clusters. And also, even among those rules with high complexity, those rules with particularly high innovation ratios tend to occur in clusters. Thus, amusingly, this simple toy model implied that evolutionary innovations should occur in (relatively) rapid cascades rather than as isolated incidents. Punctuated equilibrium once again.

    These results are intriguing because the cellular automaton is a sort of "paradigm case" of self-organization. The self-organizing processes underlying embryology are vastly more complex _ as indicated above, they have a subtle hierarchical structure. But if even the simplest self-organizing processes display structural instability, why should we doubt that more complex self-organizing processes also do?

3.8 FRACTALS AND STRUCTURAL INSTABILITY

We have seen _ from the very careful work of Li, Langton and Packard, and also from my own tentative experiments _ that 1-dimensional cellular automata are susceptible to structural instability. In fact, cellular automata appear to be much more unstable than the fractal-generating affine IFS's discussed in Chapter 1. Yet the affine IFS's are an easier way to produce intricate, striking pictures. In a sense, it seems that biological systems combine the visual fertility of affine IFS's with the structural instability of complexity-generating CA rules.

    One way of modeling this is with nonaffine iterated function systems. Let us temporarily adopt the notation of complex analysis, and instead of referring to the vector (x,y) let us speak of the point z=x+iy. Then we may consider, for example, the two-function IFS given by w1(z) = (z-c)^(1/2), w2(z) = -(z-c)^(1/2). Figure 15 shows the attractor of this IFS _ called the Julia set Jc of the IFS _ for various values of c.

    We may as well be a little more rigorous. The attractor A(w) of an IFS w = {w1,...,wN} is defined as the unique nonempty compact fixed point of the set-to-set operator W(A) = w1(A) ∪ ... ∪ wN(A). Not every IFS has an attractor, but if an attractor does exist it is easy to compute: one must merely choose an arbitrary x0 and set xn = wi(n)(xn-1) for n=1,2,3,..., where i(n) is chosen (pseudo)randomly from a uniform distribution on the set {1,...,N}. After a few iterates of transient behavior, the set {xm,xm+1,...} will begin to trace out an accurate approximation of the attractor.
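
    As a sketch of this random-iteration scheme (often called the "chaos game"), applied to the square-root IFS introduced above:

    import cmath
    import random

    def julia_ifs(c, n_points=20000, n_transient=50):
        """Approximate the attractor of the IFS w1(z) = (z-c)^(1/2),
        w2(z) = -(z-c)^(1/2) by random iteration: apply a randomly
        chosen map at each step and discard the initial transient."""
        z = 1 + 0j                         # arbitrary starting point
        points = []
        for n in range(n_transient + n_points):
            w = cmath.sqrt(z - c)          # principal square root
            z = w if random.random() < 0.5 else -w
            if n >= n_transient:
                points.append(z)
        return points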

    It is known that, in some cases, a tiny change in c can cause a marked change in the appearance of this set. The Mandelbrot set M, shown in Figure 16, is the set of all c for which Jc is a connected set. As an aside, whenever Jc is not connected, it is totally disconnected _ it is a "Cantor dust." The boundary of M, which is remarkably complicated, divides those c which yield connected Julia sets from those c which yield totally disconnected Julia sets. A very small change can push you from one side to the other.

    Jc, which I have called the Julia set of the IFS {w1,w2}, is more commonly called the Julia set of the function fc(z) = z^2 + c, which is the inverse of the functions w1 and w2 introduced above. An equivalent definition of the Julia set of an analytic function fc depending on a parameter c is as follows: it is the boundary of the set of all z for which the sequence {z,fc(z),fc(fc(z)),fc(fc(fc(z))),...} is bounded. Using this definition, one may easily explore the Julia sets of other functions, such as fc(z) = c cos z, fc(z) = ce^z, etc. Intuitively, these sets seem to display truly remarkable structural instability. For example, when c decreases along the real line through .65, the Julia set of c cos z suddenly shrinks from filling the whole plane to filling only a small portion of the plane. Commercial video cassettes are available which display the dramatic structural instability of Julia sets by varying c continuously over time while maintaining a continual, vividly colored display of the Julia set of fc.
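
    Using the boundedness definition, membership in the filled-in set is easy to test approximately. For the quadratic family it is known that once an orbit leaves the disk of radius max(2, |c|) it must escape to infinity, which justifies the cutoff in the following sketch:

    def stays_bounded(z, c, n_max=200):
        """Escape-time test for fc(z) = z**2 + c: does the orbit of z
        appear to remain bounded?  The Julia set of fc is the boundary
        of the set of z for which this holds."""
        bound = max(2.0, abs(c))    # beyond this radius, the orbit escapes
        for _ in range(n_max):
            if abs(z) > bound:
                return False
            z = z * z + c
        return True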

    These Julia sets are beautiful and eerie, and although they do not look much like real biological forms, they do demonstrate the power of nonlinear, highly structurally unstable processes to generate incredibly complicated structures. The study of nonlinear iterations is in its infancy, and at this point we still start with the mathematical equations and take whatever structures we can get. If the present rapid progress continues, however, we will in the not too distant future be able to program our computers to take in a description of a biological structure, and put out a concise set of equations which generates that structure rapidly.

3.9 ECOLOGICAL AND EVOLUTIONARY SELF-ORGANIZATION

We have been talking for a while about computer simulations which illustrate structural instability. But, of course, structural instability is only part of the story. The new forms which are generated by structural instability must interact with one another in a structured, systematic way, to form a cohesive evolutionary system. Let us next consider two very different computer simulations which illustrate the self-organizing properties of systems of organisms: Wissel's cellular automaton model of mosaic cycle ecosystems, and Kaufmann and Johnsen's brilliant, radical work on "coevolution at the edge of chaos."

Mosaic Cycle Ecosystems

As mentioned in Chapter 1, cellular automata have been used to model physical systems as diverse as fluids and immune systems. It is less well known that they have also been used to model ecosystems.

    This sort of modeling is very important because it fits in well with Lima de Faria's idea of universal principles of self-organization. He was thinking of self-organization within organisms, but in fact the cellular automaton models used to study fluid flow and immune systems within organisms are not all that different from the cellular automaton model that Wissel has used to study "mosaic cycle ecosystems."

    The mosaic-cycle theory states that an ecosystem is composed of a "mosaic" of patches. Each patch proceeds through the same natural cycle, but out of phase with the other patches. This concept was first proposed by Aubreville in 1938, as part of his study of the French West African jungle. However, it was Remmert (1991) who put the theory in general form and integrated it into modern ecological theory. Over the last two decades mosaic cycles have been observed in a variety of ecological contexts, from the marine benthos to the African and Mongolian steppes.

    The idea is best illustrated through an example. Let us consider, following Remmert (1991), the central European primary forest, which exhibits a clear cyclic pattern: first a "climax phase" consisting of trees of approximately the same height and age, then a period of decay, then a phase of rejuvenation and then a climax phase again. Often, in any given forest, only a portion of the total area exists in climax phase at any given time.

    The reasons for this asynchrony are not entirely clear, and they probably differ from case to case. For instance, in the German beech/birch/mixed forest, one of the most important factors is the tendency of beech trees to sunburn. When a beech is exposed to excessive sun, its bark peels and splits off, which eventually leads to the death of the tree. And when a beech falls from excessive sun, the sun rays which killed it can then reach the trunk of the next tree, which suffers the same fate. The place of the beeches is taken by perennials, and then by birch trees, which have white bark and therefore reflect the rays of the sun very thoroughly. The birches are followed by a variety of tree species (e.g. maple, sycamore and cherry) whose bark is covered with cracks and hence insulates more effectively than the smooth bark of the beech. Finally, these give way to a thicket of young beeches, which eventually grow into a climax phase beech forest that can last up to 300 years. This process is not rigid _ often beech will follow beech, without any intervening stages. But it is an excellent example of the type of mechanism that may underlie a mosaic cycle ecosystem.

    Wissel (1991) has given an intriguing cellular automaton model of the beech/birch/mixed forest. He considers a rectangular forest which is a lattice of square patches, as shown in Figure 18. Each patch is assigned an index (i,j), where i and j are the row and column in which the patch sits. The state of the patch (i,j) is described by a number Z(i,j), which may be interpreted as follows:

0 < Z(i,j) < 3:   There is an opening in square (i,j)

2 < Z(i,j) < 8:   There are birch trees in square (i,j), of age Z(i,j)-2 decades

7 < Z(i,j) < 23:  There is mixed forest in square (i,j), of age Z(i,j)-7 decades

22 < Z(i,j) < 26: There is a beech thicket in square (i,j), of age Z(i,j)-22 decades

25 < Z(i,j) < 51: There are adult beech trees in square (i,j), of age Z(i,j)-22 decades

50 < Z(i,j) < 56: There are old beech trees in square (i,j), of age Z(i,j)-22 decades

The cyclic dynamics discussed above is placed on each patch by discretizing time into units of ten years. After each time step of ten years, the value of Z(i,j) is increased by 1 _ that is, if Z(i,j) < 55, Z(i,j) is replaced with Z(i,j)+1, and if Z(i,j)=55, Z(i,j) is replaced by 1.

    Asynchrony is induced by the following collection of rules. If Z(i,j)=1 for a certain square and one of its eight neighbors contains birch trees, an early colonization by birch trees occurs: Z(i,j) is set to 3 instead of 2 at the next time step. If there are no birch trees in this immediate neighborhood, but there is mixed forest, a colonization by mixed forest occurs: Z(i,j) is set to 8 rather than 2. If Z(i,j) is between 51 and 55, then one has what is called a "dieback" phase, in which old beech trees may die for a number of different reasons. This is modeled by assuming that, at each time step, Z(i,j) is set back to 1 with probability P0.

    Finally, if an old beech tree falls, the neighboring trees to the north are suddenly exposed to direct sunlight and will probably die within around ten years. The probability of death depends upon the exact direction from which the sunlight comes. Wissel's rule is that Z(i,j) is set back to 1 with probability Ps if Z(i,j+1) = 1 or 2, with probability Pw if Z(i-1,j) = 1 or 2, and with probability Pe if Z(i+1,j+1) = 1 or 2.

    The trees on the boundary of the forest pose a special problem, since they have no neighbors to shield them. One may either assume that they are shielded by shrubbery, or impose periodic boundary conditions. The final results are mainly independent of this decision, and they are also independent of the particular values of P0, Ps, Pw and Pe. Minor modifications in the update rules have also been shown not to have significant effects.
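
    To fix ideas, here is a minimal sketch of the update rule, under two assumptions of my own: illustrative parameter values (as just noted, the results are insensitive to the exact choices), and the restriction of the sun-exposure rule to the beech stages, since the mechanism described above is beech sunburn.

    import random

    P0, PS, PW, PE = 0.1, 0.5, 0.3, 0.2    # illustrative values only
    N = 50    # lattice size, with periodic boundary conditions

    def step(Z, rng=random):
        """One ten-year update of Wissel's forest model; Z is an N x N
        grid of the integer states described above."""
        new = [[0] * N for _ in range(N)]
        for i in range(N):
            for j in range(N):
                z = Z[i][j]
                if z == 1:    # an opening: early colonization rules
                    nbrs = [Z[(i + a) % N][(j + b) % N]
                            for a in (-1, 0, 1) for b in (-1, 0, 1)
                            if (a, b) != (0, 0)]
                    if any(3 <= s <= 7 for s in nbrs):      # birch nearby
                        new[i][j] = 3
                    elif any(8 <= s <= 22 for s in nbrs):   # mixed forest
                        new[i][j] = 8
                    else:
                        new[i][j] = 2
                    continue
                dies = 51 <= z <= 55 and rng.random() < P0  # dieback phase
                if z >= 23:    # sudden sun exposure (beech stages only)
                    dies = dies or (Z[i][(j + 1) % N] in (1, 2)
                                    and rng.random() < PS)
                    dies = dies or (Z[(i - 1) % N][j] in (1, 2)
                                    and rng.random() < PW)
                    dies = dies or (Z[(i + 1) % N][(j + 1) % N] in (1, 2)
                                    and rng.random() < PE)
                new[i][j] = 1 if dies else (1 if z == 55 else z + 1)
        return new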

    What are the results of this experiment? First of all, the number of beech squares is periodic, as is the mean age of the beech trees. Other quantities turn out to be periodic as well. But the big news is that certain spatial patterns spontaneously emerge. Beginning from a random distribution of patches, a semi-ordered arrangement of patches like that shown in Figure 19 almost inevitably evolves (for frequency analysis see Figure 20). Thus, in this very simple context, ecological dynamics are self-organizing. As Wissel reports, it appears that some real forests display characteristic spatial patterns similar to those found in these simulations.

    Wissel's cellular automaton model appears to be structurally stable. Even if one fiddles a little with the parameters or the update rules, one still obtains qualitatively similar patterns. These are emergent patterns _ they are not present in any individual cell, but only in the conjunction of the cells, the entire forest.

    Incidentally, the evolutionary significance of the mosaic-cycle theory has yet to be explored. Clearly, fitness means something different in a forest characterized by mosaic-cycle patterns than it would in a random forest or in a forest consisting of one synchronous patch. This difference is insignificant for most organisms, but it could conceivably be significant for some. This may be an interesting concrete instance of the general principle that fitness depends on self-organizing ecological phenomena, which in turn rely upon the fitnesses of certain organisms.

Coevolution at the Edge of Chaos

The mosaic cycle theory is a particularly graphic illustration of ecological self-organization. It is most impressive in the way it relates cellular automata, a staple of the general theory of self-organizing systems, with relatively hard ecological data. The computer simulations of Kaufmann and Johnsen (1991), on the other hand, are not directly connected with empirical data. But they have a lot more to say about the relation between ecological self-organization and the process of evolution. These simulations depend upon Kaufmann's "NK model," so we shall begin by reviewing the properties of this extremely handy construction.

    For simplicity, assume that the genetic material of an organism is represented by a binary sequence of length N. Each place of the sequence represents a gene, a trait which may be either present or absent. The fitness contribution of each gene xi depends upon itself and epistatically on K other genes. Kaufmann's "NK model" assumes that these epistatic dependencies are so unpredictable that they may as well be considered random. To each possible combination of "inputs" to xi, Kaufmann assigns a random fitness contribution between 0 and 1. The fitness of the sequence is then defined as the average fitness of all its entries. Technically, this is an N locus two allele additive haploid genetic model.

    This model provides a fitness landscape whose ruggedness is "tunable" _ the larger K is, the less correlated are the fitnesses of similar sequences. For K=0, there is one particular fittest sequence, and the fitness of an arbitrary sequence may be roughly gauged by its distance to this sequence. For maximal K, K=N-1, the landscape is totally random: for example, knowing the fitness of 000001 tells you nothing about the fitness of 000011. Furthermore, as K goes to N-1, the average fitness of a local maximum tends toward .5, thus rendering evolutionary mutation almost useless. Roughly speaking, increasing K increases the evolutionary instability of the system.
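
    A sketch of the NK construction, with two implementation choices of my own (the K epistatic partners of each locus are drawn at random, and the random contribution tables are filled in lazily):

    import random

    def make_nk_fitness(N, K, seed=0):
        """Kaufmann's NK model: the contribution of locus i depends on
        its own allele and on the alleles of K other loci, via a random
        lookup table; fitness is the average contribution."""
        rng = random.Random(seed)
        partners = [rng.sample([j for j in range(N) if j != i], K)
                    for i in range(N)]
        tables = [{} for _ in range(N)]

        def fitness(genome):    # genome: tuple of 0s and 1s, length N
            total = 0.0
            for i in range(N):
                key = (genome[i],) + tuple(genome[j] for j in partners[i])
                if key not in tables[i]:
                    tables[i][key] = rng.random()    # drawn from [0, 1]
                total += tables[i][key]
            return total / N
        return fitness

For instance, make_nk_fitness(20, 4) yields a moderately rugged landscape on length-20 genomes, while raising K toward 19 makes the landscape essentially random.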

    Kaufmann and Johnsen model an ecosystem with S species as a collection of S binary sequences of length N. Each species is represented by a sequence; hence it is tacitly assumed that all individuals of the same species are genetically identical. They generalize the NK model as follows: each trait in species i depends on K traits in species i and C traits in each of the other Si species with which species i interacts. The question which they ask is: under what conditions can true coevolution occur? When will a model ecosystem organize itself toward a generally higher level of fitness?
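
    The coevolutionary dynamics can then be sketched as a round of adaptive walks, one per species. The details below (one-bit trial mutations, accepted only when strictly beneficial, with rng a random.Random instance) are my own simplification; the fitness function is assumed to have been extended, as just described, so that scoring species s reads C traits from each species with which it interacts.

    def coevolve_step(genomes, fitness, rng):
        """One round of coevolution.  genomes is a list of length-N bit
        tuples, one per species; fitness(s, genomes) scores species s in
        the context of all the others.  Each species tries one random
        bit-flip and keeps it only if its own fitness strictly improves."""
        for s in range(len(genomes)):
            g = genomes[s]
            i = rng.randrange(len(g))
            trial = g[:i] + (1 - g[i],) + g[i + 1:]
            old = fitness(s, genomes)
            genomes[s] = trial
            if fitness(s, genomes) <= old:
                genomes[s] = g    # revert: the mutation did not help
        return genomes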

    It turns out that the quantity K/CSi is a sort of "order parameter." If this ratio is large for each species i, then the ecosystem as a whole quickly reaches a total or partial equilibrium state _ a state in which few or no species can improve their fitnesses by mutating a small amount. A partial equilibrium state is a state which is dominated by a frozen component _ a substantial segment of the ecosystem which is impervious to small changes. Recall that the immunological simulations discussed in the previous chapter also resulted in the emergence of a large frozen component under certain conditions. On the other hand, if the ratio K/CSi is small for each species, then the ecosystem appears to behave chaotically. The presence of "chaos" here has not been proved in a mathematical sense, but it is apparent upon inspection of the data.

    Finally, if the ratio K/CSi is neither too small nor too large, then the ecosystem is fairly likely to evolve into a sustained high-fitness configuration. The moral is that the ruggedness of the fitness landscape of each species must come partly from internal factors and partly from ecological factors, and these two parts must be in the proper balance. Coevolution is on the boundary between chaos and stagnation. The exact dynamics underlying this process are obscure. There is a lot of subtle instability involved _ for example, it is known that, in a simulated ecosystem, small perturbations can lead to dramatic events such as extinction (a species is considered extinct if its fitness is too close to 0).    

    The cellular automata results discussed earlier deal only with self-organization. The Kaufmann and Johnsen experiments deal with ecological evolution in a context of high evolutionary instability, but they ignore all the subtlety of intra-organismal self-organization. They are a fairly accurate model of the limiting case in which self-organization generates deterministic chaos. It is not clear, however, whether the results found in this special case will generalize to situations in which evolutionary instability is a consequence of high innovation ratio, rather than of simple randomness.

3.10 PATTERN DYNAMICS AND OTHER APPROACHES TO SELF-ORGANIZING EVOLUTION

This chapter has been long and multifaceted. We began with a very simple argument against the strict Darwinist view of evolution, drawn from Augros and Stanciu's New Biology. This argument, combined with the treatment of the immune system given in Chapter 2, led us to an alternate point of view, a structurally-oriented, system-level formulation of the theory of natural selection.

    After giving this definition of natural selection, the discussion diverged into a number of different subject areas. We considered the possibility of non-random variation, the fact of structural instability, the relation of structural instability with embryology and self-organization, and the diverse self-organizing qualities of ecosystems. As warned at the beginning of Section 3.1, strict Darwinism is a complex theory, as is the self-organizational alternative.

    However, among all these diverse explorations, there has been one common thread. Recall Gregory Bateson's Metapattern, introduced in Chapter 1: in the biological world, everything is made of pattern. Our goal throughout this chapter has been to formulate the theory of natural selection in a way fully compatible with this axiom. We have defined natural selection as a certain correlation between survival rates and sets of emergent patterns. We have explained evolutionary innovation in terms of structural instability, a quantity defined in terms of pattern distance. We have explored the hypothesis that certain characteristic patterns underlie apparently random variation. And we have looked at some emergent patterns in ecosystems and simulated ecosystems _ mosaic cycles, and the "frozen components" of Perelson's simulated immune systems and Johnsen and Kaufmann's simulated populations.

    This emphasis on pattern is what differentiates the present approach _ which I will call, from here on, the pattern dynamic approach _ from other "system-theoretic" analyses of evolutionary systems, such as Odum's (1983) energy-transfer ecology, and Prigogine's thermodynamically-inspired theory of complex system behavior (Prigogine et al, 1972, 1984, 1989). A brief consideration of these two highly productive research programmes may serve to highlight the innovative features of the present, pattern-based analysis.

Maximum-Power Evolution    

Like the pattern dynamic approach pursued here, Odum's analysis of ecosystems emphasizes self-organizing processes. But Odum formulates these processes in terms of energy flow. He defines emergy as "the energy of one type which is required to generate that of another type," and maximum-power emergy as "the least flow of emergy that is capable of generating the flow in question under conditions which maximize the power of the whole system." (1986, p. 341) Finally, transformity is defined as "the joules of one energy type which are required to generate a joule of another type" (1986, p. 344) _ it comes in units of solar emjoules per joule. Figure 20 is a typical example of how Odum applies these ideas to actual ecological systems.

    Odum's main hypothesis is that evolutionary systems are self-organized for maximum power, and thus that a long-operating system should have its observed emergy close to that which accompanies a design that maximizes power. Incidentally, Swenson (1989) has given an alternative formulation of this idea, one which is in some respects superior. His approach is to add a "rate term" to the Second Law of Thermodynamics, yielding a revised Second Law which states not only that entropy is nondecreasing, but also that, given a choice of paths toward entropy increase, a system will tend to choose the path giving maximum entropy increase.

    The intuition underlying both Swenson's and Odum's work is that the systems which provide maximum entropy increase or maximum power will be systems with complex structures. How valid this intuition is remains unclear. If one could find a connection between maximum power or maximum entropy increase and some independent measure of complexity, such as structural complexity, then the thermodynamic approach to system complexity would be on much firmer footing.

    I am uncertain as to whether the hypothesis of maximum-power evolution is accurate; but it is an interesting idea, and it appears to often be useful. My point here, however, is not to debate whether the hypothesis is correct, but merely to highlight its fundamental difference from the structural approach taken here. "Power" and "energy" are physics ideas, and "emergy" and "transformity" are minor variations thereof. On the other hand, the structural approach taken here assumes from the very beginning that what makes complex self-organizing systems interesting are their pattern dynamics. Energy and power are certainly involved _ after all, the laws of physics show how organisms hold together, how they metabolize, and so forth. But the structural view focuses on patterns of matter and energy distribution, rather than numerical values of energy flow. Odum's diagrams contain a lot of information, but they will never tell you which qualities of an ecosystem emerge from which others; they speak a different language.

The Prigogine Programme    

Odum's ecology is surface-level, whereas the approach taken here delves _ in theory at least _ into the deep structures of things. However, the theory of pattern is not the only tool for talking about the structures and processes underlying complex systems. The research programme of Prigogine and his colleagues offers an alternative _ one which, like Odum's theory, is based on physics ideas rather than the theory of computation.

    The strength of Prigogine's programme is the weakness of the pattern-dynamic theory given here. The theory of pattern is, at present, disappointingly un-operational. It is easy to formulate general statements in the language of pattern, but as yet we have no simple techniques for applying these general statements to particular situations. The basic quantities involved _ emergence, structure, structural complexity, pattern distance _ are either impossible or extremely difficult to compute, depending on the context. In order to apply the theory of pattern to a variety of practical situations, one would need easy, well-understood techniques for approximating these quantities by restricting attention to certain types of patterns. One such technique _ looking only at repetitive patterns _ is described in (Goertzel, 1992c), but it has not yet been developed beyond the rudimentary stage.

    Prigogine's work, on the other hand, is based on modeling chemical and biological structures in terms of nonlinear differential equations, the solutions to which can be approximated by a host of well-known computational techniques. Like virtually everything else based on calculus, Prigogine's theories are wonderfully operational _ there are new, interesting and tractable examples around every corner.

    Prigogine's work has led to some important insights. To quote Toffler's beautiful Foreword to Order Out of Chaos once again:

        In Prigoginian terms, all systems contain subsystems, which are continually "fluctuating." At times, a single fluctuation or a combination of them may become so powerful, as a result of positive feedback, that it shatters the preexisting organization. At this revolutionary moment _ the authors call it a "singular moment" or a "bifurcation point" _ it is inherently impossible to determine in advance which direction change will take: whether the system will disintegrate into "chaos" or leap to a new, more differentiated, higher level of "order" or organization, which they call a "dissipative structure"....

        [T]he system is pushed into a far-from-equilibrium condition, and here non-linear relationships prevail. In this state, systems do strange things. They become inordinately sensitive to external influences. Small inputs yield huge, startling effects. The entire system may reorganize itself in a way that strikes us as bizarre.

        Examples of such self-reorganization abound.... Heat moving evenly through a liquid suddenly, at a certain threshold, converts into a convection current that radically reorganizes the liquid, and millions of molecules, as if on cue, suddenly form themselves into hexagonal cells.

        ... [I]n far-from-equilibrium conditions, we find that very small perturbations or fluctuations can become amplified into gigantic, structure-breaking waves. This sheds light on all sorts of "qualitative" or "revolutionary" change processes.

What Toffler is talking about here is nothing other than structural instability. But, due to the awesome power of calculus, rather than simply defining structural instability, Prigogine and his colleagues are able to derive one example after another, using well-known mathematical methods.

    However, one must note that these examples do not involve biological structural instability. They are on a much, much lower level of complexity than, for instance, the alteration of an axolotl from a water-breathing to an air-breathing animal. No one is particularly confident that the same methods used to study self-organization in molecules will ever be up to the task of describing large-scale biological self-organization.

    For instance, in Exploring Complexity Nicolis and Prigogine review a differential-equations model of the immune system. They find what they are looking for: the response of an immune system to an assault depends sensitively on its particular history. A tiny change in certain parameters can yield a large change in immune system behavior, a change which may well determine the fate of the organism. But this sort of analysis does not get at the subtler aspects of immune system structure and dynamics _ it is on a whole different level from Perelson's computer simulations, and the pattern-theoretic ideas of Section 3.3. The latter two analyses are mainly discrete in nature; one cannot use handy calculus tools to deal with them. But they are much more revealing of higher-order self-organizing properties of the immune system.

    It is interesting to observe that, in Exploring Complexity, Nicolis and Prigogine refer again and again to discrete computational ideas. This is a marked contrast to Prigogine's previous work. We find comments such as:

        Just as it becomes natural to describe the state of a non-equilibrium system in terms of the correlations between macroscopically separated elements rather than in terms of the intermolecular forces, and to describe the phenomenon of bifurcation in terms of the order parameter rather than in terms of the state variables initially present _ so it now appears that in certain classes of stochastic dynamical systems it becomes in turn natural to introduce a still higher level of abstraction and speak of symbols and information. (p. 191)

What they are talking about here is one of the biggest insights of modern dynamical systems theory: symbolic dynamics. This method of analysis has revealed that one way to study the behavior of a very complex continuous dynamical system _ stochastic or deterministic _ is to set up an appropriate discrete system (one which is topologically conjugate or semiconjugate to the original system), and study it instead. Devaney (1989) explains this methodology very well, in the context of deterministic dynamical systems. Symbolic dynamics is a very fertile field of study, which is continually yielding new connections between calculus concepts and discrete concepts _ for a very minor example from my own research, see (Goertzel, Bowman and Baker, 1992).

    The pattern dynamic approach says: if one needs to "speak of symbols and information" in order to deal with truly complex dynamical systems, why not just speak of only them, or at least consider them as primary? _ why not erect a whole theory of highly complex systems based on symbols and information, rather than on microscopic physical dynamics?

    But, in response to this objection, Nicolis and Prigogine have a ready-made retort. After introducing the concept of algorithmic complexity, and observing that while a random sequence has maximum algorithmic complexity, a totally repetitive sequence has minimum algorithmic complexity, they note that "the complexity of natural objects lies somewhere between these two extremes, since in addition to randomness it also involves some large scale regularities...." And then they claim that

        the self-organized states of matter allowed by non-equilibrium physics provide us with models of precisely this sort of complexity. Most important among these states ... is chaotic dynamics. Indeed, the instability of motion associated with chaos allows the system to explore its state space continuously, thereby creating information and complexity.... [W]e have finally identified a physical origin and some plausible mathematical archetypes of complexity. (p. 192)

They are claiming that "archetypal" structures of intermediate algorithmic complexity can be obtained through studying physical systems governed by simple differential equations. And, more radically, they are suggesting that chaos is the most fundamental concept for understanding these systems.

    This passage displays very clearly the philosophical differences between Prigogine's paradigm and the pattern dynamic point of view. I have been a big fan of Prigogine's work for years, and I have taught and studied chaos theory with great enthusiasm. However, I do not believe that chaos is the "most important" kind of dynamics for the study of complex systems. Chaos is simply the particular type of self-organizing dynamics that is most easily studied in terms of simple nonlinear differential and difference equations. It seems to me that Nicolis and Prigogine have at least partially succumbed to a very common fallacy: mistaking the models that one can work with most easily for those which best represent reality. The approach of the present book, on the other hand, involves seeking models that are intuitively faithful representations of reality _ and only then worrying about tractability. If a dynamical systems model simulates a variety of complex system behaviors, then it is a persistent pattern and deserving of much attention. But one should not assume that these are the only persistent patterns there are.

Self-Structuring Systems    

Let us begin at the beginning again. Pattern dynamics holds that it is possible to analyze a mind, or any other highly complex self-organizing system, by diagramming the flow of pattern through the system. This is different from Odum-style complex systems theory, which diagrams the flow of various physical parameters. And it is also different from Prigogine-style complex-systems theory, which tries to explicitly obtain all concepts for analyzing complex patterns from simple microscopic dynamics.

    Let us take a good long look at the concept of "self-organization," which we have been using but have not yet tried to define. It is more than a little bit slippery. On the one hand, it refers to massively complex systems such as minds, brains, ecosystems and societies. But on the other hand, it is often used to describe chemical phenomena such as the Belousov-Zhabotinsky reaction (Serra et al, 1986) and the Eigen-Schuster hypercycle (Jantsch, 1980). One may finesse this distinction by referring to "highly complex self-organizing systems" and the like, but a more careful treatment is desirable.

    It seems to me that self-organization is too general a concept. It is essentially negative: to say that a system is self-organizing is to say that its many component parts do not interact in a highly orderly or an overly random way. Thus a general theory of self-organizing systems is no more likely than a general theory of non-linear equations. Just knowing what an entity is not does not usually give one much to go on.

    Prigogine (1972), Jantsch (1984), Swenson (1992) and others have approached self-organization from the chemical, thermodynamic side. As already stressed, this work is very interesting; it contains some powerful results regarding chemical systems of unpredictable structure. However, I am skeptical that it will ever lead to nontrivial conclusions regarding systems such as brains, ecosystems and societies. These systems are "self-structuring" in the sense that they recognize patterns in themselves and alter themselves accordingly. In this respect they are qualitatively different from those systems which the thermodynamic school has analyzed in detail.

    Formally, the concept of "self-structuring" may be expressed in terms of the sensitivities defined in SI, and to be reviewed in Chapter 6: a self-structuring system may be defined as a system which is "highly L.-, S.- and R.S.- sensitive, but not so highly S.S.-sensitive." This means, for example, that their future structure is tractably determinable from their past structure, even though their future state is not tractably determinable from their past state. I conjecture that brains, minds, ecosystems and societies fall into this category. They are not chaotic, and yet they are not orderly in the classical sense either _ they are systems of unpredictably fluctuating, self-organizing structure.

    This definition forms a link between intelligence and system complexity: it suggests the hypothesis that intelligence is possible only in a self-structuring environment. And it meshes very nicely with the idea, implicit in the theory of mind to be presented in Chapter 6, that intelligent systems are necessarily self-structuring.

    Self-structuring systems stand in the same relation to the chemistry lab as proteins stand to the particle accelerator. Self-structuring systems are made of self-organizing chemical systems, and proteins are made of particles, but it would take a true breakthrough to put either of these theoretical reductions to practical use.

    Pattern dynamics, the study of the flow of pattern through a system, is relevant exclusively to self-structuring systems. It gives us no insight whatsoever into the Belousov-Zhabotinsky reaction, or the self-organizing dynamics underlying laser light. However, if a system is unpredictable on the level of exact physical parameters and yet somewhat predictable on the level of overall structure, then it is appropriate to look at the flow of patterns rather than the flow of physical quantities. That is the motivation behind the approach taken in this book.