"Chaos theory" has, in the space of two decades, emerged from the scientific literature into the popular spotlight. Most recently, it received a co-starring role in the hit movie Jurassic Park. Chaos theory is billed as a revolutionary new way of thinking about complex systems -- brains, immune systems, atmospheres, ecosystems, you name it.
It is always nice to see science work its way into the mass media. But I must admit that, as a mathematician trained in chaotic dynamics, I find this sudden interest in chaos theory a little odd. The excitement about chaos theory stems from a perception that it somehow captures the complex "disorganized order" of the real world. But in fact, chaos theory in the technical sense has fewer well-developed real world applications than obscure areas of applied math like Lotka-Volterra equations, Markov chains, Hilbert spaces, and so forth. Where chaos is concerned, there is a rather large gap between the philosophical, prospective hype and the actual, present-day science.
To understand this gap in more detail, consider what one studies in a first course on chaos theory: discrete iterations like the tent map, the Baker map and the logistic iteration (Devaney, 1988); or else elementary nonlinear differential equations such as those leading to the Lorenz attractor. These systems are all "low-dimensional," in the sense that the state of the system at each time is specified by a single number, or a short list of numbers. And they are simple, in the sense that the rule which determines the state of the system at time t+1 from the state of the system at time t has a brief expression in terms of elementary arithmetic.
All these systems have one novel property in common: whatever state one starts the system in at "time zero," the odds are that before long the system will converge on a certain region of state space called the "attractor." The states of the system will then fluctuate around the "attractor" region forever, apparently at random. This is "chaos," a remarkable, intriguing phenomenon -- and a phenomenon which, on the surface at least, appears to have little to do with complex, self-organizing systems. It is obvious that complex systems are not pseudo-random in the same sense that these "toy model" dynamical systems are. Something more is going on.
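To make this behavior concrete, consider the simplest standard example, the logistic iteration x(t+1) = r x(t) (1 - x(t)). The short Python sketch below -- a toy illustration only, on which nothing later in the book depends -- iterates this map at r = 4 from two nearly identical starting states. Both orbits remain confined to the unit interval, the attractor region, yet they diverge from each other almost immediately.

    # The logistic map x(t+1) = r * x(t) * (1 - x(t)).  At r = 4.0 the
    # iteration is chaotic: orbits stay inside [0, 1], but two nearby
    # starting states separate rapidly ("sensitive dependence").

    def logistic_orbit(x0, r=4.0, steps=25):
        """Return the orbit of the logistic map starting from x0."""
        orbit = [x0]
        for _ in range(steps):
            x0 = r * x0 * (1.0 - x0)
            orbit.append(x0)
        return orbit

    a = logistic_orbit(0.200000)
    b = logistic_orbit(0.200001)   # an almost identical initial state

    for t, (xa, xb) in enumerate(zip(a, b)):
        print(f"t={t:2d}  x={xa:.6f}  x'={xb:.6f}  |difference|={abs(xa - xb):.6f}")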
One way to sidestep this problem is to posit that complex systems like brains present "high-dimensional dynamics with underlying low-dimensional chaos." There is, admittedly, some evidence for this view: mood cycles, nostril cycles and EEG patterns demonstrate low-dimensional chaotic attractors, as do aspects of animal behavior, and of course numerous parameters of complex weather systems.
But at bottom, the recourse to dimensionality is an evasive maneuver, not a useful explanation. The ideas of this book proceed from an alternative point of view: that complex, self-organizing systems, while unpredictable on the level of detail, are interestingly predictable on the level of structure. This is what differentiates them from simple dynamical systems, which are almost entirely unpredictable on the level of structure as well as the level of detail.
In other words, I suggest that the popular hype over chaos theory is actually an enthusiasm over the study of complex, self-organizing systems -- a study which is much less developed than technical chaos theory, but also far more pregnant with real-life applications. What most chaos theorists are currently doing is playing with simple low-dimensional "toy iterations"; but what most popular expositors of chaos are thinking about is the dynamics of partially predictable structure. Therefore, I suggest, it is time to shift the focus from simple numerical iterations to structure dynamics.
To understand what this means, it suffices to think a little about chaos psychology. Even though the dynamics of the mind/brain may be governed by a strange attractor, the structure of this strange attractor need not be as coarse as that of the Lorenz attractor, or the attractor of the logistic map. The structure of the strange attractor of a complex system contains a vast amount of information regarding the transitions from one patterned system state to another. And this, not the chaos itself, is the interesting part.
Unfortunately, there is no apparent way to get at the structure of the strange attractor of a dynamical system like the brain, which presents hundreds of billions of interlinked variables even in the crudest formal models. Therefore, I propose, it is necessary to shift up from the level of physical parameters, and take a "process perspective" in which the mind and brain are viewed as networks of interacting, inter-creating processes.
The process perspective on complex systems has considerable conceptual advantages over a strictly physically-oriented viewpoint. It has a long and rich philosophical history, tracing back to Whitehead and Nietzsche and, if one interprets it liberally enough, all the way back to the early Buddhist philosophers. But what has driven recent complex-systems researchers to a process view is not this history, but rather the inability of alternative methods to deal with the computational complexity of self-organizing systems.
George Kampis's (1991) Self-Modifying Systems presents a process perspective on complex systems in some detail, relating it with various ideas from chemistry, biology, philosophy and mathematics. Marvin Minsky's (1986) Society of Mind describes a process theory of mind; and although his theory is severely flawed by an over-reliance on ideas drawn from rule-based AI programs, it does represent a significant advance over standard "top-down" AI ideas. And, finally, Gerald Edelman's (1988) Neural Darwinism places the process view of the brain on a sound neurological basis.
Here, however, I will move far beyond neural Darwinism, societal computer architecture and component-system theory, and propose a precise cognitive equation, hypothesized to govern the creative evolution of the network of mental processes. When one views the mind and brain in terms of creative process dynamics rather than physical dynamics, one finds that fixed points and strange attractors take on a great deal of psychological meaning. Process dynamics give rise to highly structured strange attractors. Chaos is seen to be the substrate of a new and hitherto unsuspected kind of order.
Chaos theory proper is only a small part of the emerging paradigm of complex systems science. In the popular literature the word "chaos" is often interpreted very loosely, perhaps even as a synonym for "complex systems science." But the distinction is an important one. Chaos theory has to do with determinism underlying apparent randomness. Complex systems science is more broadly concerned with the emergent, synergetic behaviors of systems composed of a large number of interacting parts.
To explain what complex systems science is all about, let me begin with some concrete examples. What follows is a highly idiosyncratic "top twelve" list of some of the work in complex systems science that strikes me as most impressive. The order of the items in the list is random (or at least chaotic).
1. Alan Perelson, Rob deBoer (1990) and others have developed computer models of the immune system as a complex self-organizing system. Using these models, they have arrived at dozens of new predictions regarding immune optimization, immune memory, the connectivity structure of the immune network, and other important issues.
2. Stuart Kauffman (1993) has, over the last three decades, systematically pursued computer simulations demonstrating the existence of "antichaos." He has found that random Boolean networks behave in a surprisingly structured way; and he has used these networks to model various biological and economic systems. (A toy simulation in this spirit is sketched just after this list.)
3. Gregory Bateson (1980) has modeled a variety of social and psychological situations using ideas from cybernetics. For instance, he has analyzed Balinese society as a "steady-state" system, and he has given system-theoretic analyses of psychological problems such as schizophrenia and alcoholism.
4. Gerald Edelman (1988) has devised a theory of brain function called Neural Darwinism, based on the idea that the brain, like the immune system, is a self-organizing evolving system. Similar ideas have been proposed by other neuroscientists, like Jean-Pierre Changeux (1985).
5. Starting from the classic work of Jason Brown (1988), a number of researchers have used the concept of "microgenesis" to explore the mind/brain as a self-organizing system. This point of view has been particularly fruitful for the study of linguistic disorders such as aphasia.
6. There is a very well-established research programme of using nonlinear differential equations and thermodynamics to study far-from-equilibrium self-organizing systems. The name most commonly associated with this programme is that of Ilya Prigogine (Prigogine and Stengers, 1984).
7. A diverse community of researchers (Anderson et al., 1987) has used ideas from stochastic fractal geometry and nonlinear differential equations to model the self-organization inherent in economic processes (such as the stock market).
8. G. Spencer Brown's classic book Laws of Form (1972) gives a simple mathematical formalism for dealing with self-referential processes. Louis Kauffman (1986), Francisco Varela (1978) and others have developed these ideas and applied them to analyze complex systems such as immune systems, bodies and minds.
9. For the past few years the Santa Fe Institute has sponsored an annual workshop on "Artificial Life" (Langton, 1992) -- computer programs that simulate whole living environments. These programs provide valuable information as to the necessary and sufficient conditions for generating and maintaining complex, stable structures.
10. John Holland (1975) and his colleagues such as David Goldberg (1988) have constructed a research programme of "genetic optimization," in which computer simulations of evolving populations are used to solve mathematical problems.
11. Over the past decade, a loose-knit group of researchers from different fields have been exploring the applications of "cellular automata" to model various self-organizing phenomena, from fluid dynamics to immunodynamics. Cellular automata (Wolfram, 1986) are simple self-organizing systems that display many elegant emergent properties of an apparently "generic" character.
12. Vilmos Csanyi (1990), George Kampis (1991) and Robert Rosen (1992), among others, have kept alive the grand European tradition of General Systems Theory, using sophisticated ideas from mathematics and physical science to demonstrate that complex self-organizing systems must be understood to be creating themselves.
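To give a concrete taste of how simple some of this work is at its core, here is a toy Python sketch in the spirit of the "antichaos" simulations mentioned in item 2. The wiring scheme and parameter values are my own arbitrary choices, not Kauffman's; the point is only that a randomly wired Boolean network, updated deterministically, typically settles onto a surprisingly short repeating cycle of states -- order emerging out of randomness.

    # A toy random Boolean network: n on/off elements, each updated by a
    # random Boolean function of k randomly chosen elements.  Despite the
    # random wiring, the network soon falls onto a short cycle of states.
    import random

    def random_boolean_network(n=20, k=2, seed=0):
        rng = random.Random(seed)
        inputs = [rng.sample(range(n), k) for _ in range(n)]              # who listens to whom
        tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
        return inputs, tables

    def step(state, inputs, tables):
        new = []
        for ins, table in zip(inputs, tables):
            index = 0
            for j in ins:                      # encode this element's inputs as a table index
                index = (index << 1) | state[j]
            new.append(table[index])
        return tuple(new)

    def cycle_length(n=20, k=2, seed=0, max_steps=100000):
        inputs, tables = random_boolean_network(n, k, seed)
        rng = random.Random(seed + 1)
        state = tuple(rng.randint(0, 1) for _ in range(n))
        seen = {}
        for t in range(max_steps):
            if state in seen:                  # a revisited state means we are on a cycle
                return t - seen[state]
            seen[state] = t
            state = step(state, inputs, tables)
        return None

    print("attractor cycle length:", cycle_length())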
Complex systems science is not as yet an official academic discipline; there are no university departments of complex systems science. However, there are a few research institutes and professional organizations. For instance, the Santa Fe Institute has supported a wide variety of research in complex systems science, including the work on immunology, antichaos, artificial life and genetic optimization mentioned above. In recognition of these efforts, the Institute recently received a MacArthur Foundation "genius grant."
The Center for Complex Systems in Illinois has also, as one would expect from the name, been the location of a great deal of complex systems research, mainly dealing with applications of cellular automata. And, finally, the Society for Chaos Theory in Psychology, now in its third year, has served to bring together an impressive number of social, behavioral and physical scientists interested in studying the mind as a complex self-organizing system.
Parenthetically, it is worth noting that the battle for the word "chaos" is not yet over. A few weeks after I wrote the preceding paragraphs, I ran across an interesting discussion on the Internet computer network, which really drove this point home. Someone posted a news item on several computer bulletin boards, declaring the imminent creation of a new bulletin board focusing on chaos theory. The only problem remaining, the news item said, was the selection of a name. Many variations were suggested, from "sci.math.nonlinear" to "sci.emergence.chaos" to "sci.nonlinear" to "sci.chaos" to "sci.math.chaos" to "sci.complexity."
Most discussants rejected the names "sci.chaos" and "sci.math.chaos" as encouraging a mistakenly wide interpretation of the word "chaos." But the fact is that there are already several unofficial newsgroups dealing with the subject of complex systems science. And these are all named -- "sci.chaos"! No amount of rational argumentation can counteract a habit. This is nothing else but chaotic logic at work, in a wonderfully self-referential way. It is chaos regarding "chaos," but only if one accepts the result of this chaos, and calls complex systems science "chaos."
Perhaps one should not shed too many tears over the fact that the name "chaos theory" is at variance with standard mathematical usage. After all, mathematicians did not invent the word "chaos"! In its original theological meaning, "Chaos" simply referred to the void existing between Heaven and Earth. In other words, it had virtually nothing to do with any of its current meanings.
But anyhow, I am amused to report that the newsgroup finally took on the name "sci.nonlinear." This is also a misnomer, since many nonlinear systems of equations are neither chaotic nor self-organizing. Also, many complex systems have nothing to do with linear spaces and are hence not nonlinear but alinear. But, be that as it may, one may chalk up one for the anti-"chaos" forces!
All kidding aside, however, I do think that using the name "chaos theory" for complex systems science has one significant disadvantage. It perpetuates an historical falsehood, by obscuring the very deep connections between the modern theory of self-organizing systems and the "General Systems Theory" of the forties and fifties.
Today, it seems, the average scientist's opinion of General Systems Theory is not very good. One often hears comments to the effect that "There is no general systems theory. What theoretical statements could possibly be true of every system?" In actual fact, however, the General Systems Theory research programme was far from being a failure. Its many successes include Bateson's psychological theories, Ashby's work in cybernetics, McCulloch's groundbreaking work on neural networks, and a variety of ideas in the field of operations research.
The truth is simply that after a decade or two, General Systems Theory collapsed under the weight of its own ambitions. It was not proved "wrong" -- it said what it had to say, and then slowly disappeared. True, it did not turn out to be nearly as productive as its creators had envisioned; but this doesn't contradict the fact that it was very productive anyway.
What does modern complex systems science have that General Systems Theory did not? The answer, I suspect, is remarkably simple: computing power. Of the twelve contributions to complex systems science listed above, seven -- immune system modeling, "antichaos" modeling, far-from-equilibrium thermodynamics, artificial life, genetic optimization, cellular automata and fractal economics -- rely almost entirely on computer simulations of one sort or another. An eighth, Edelman's theory of Neural Darwinism, relies largely on computer simulations; and a ninth, Spencer-Brown's self-referential mathematics, was developed in the context of circuit design.
Computing power has not been the only important factor in the development of complex systems science. For example, the revolutionary neurobiological ideas of Edelman, Changeux, Brown and others would not have been possible without recent advances in experimental brain science. And my own work depends significantly not only on ideas derived from computer simulations, but also on the theory of algorithmic information (Chaitin, 1987), a branch of computer science that did not exist until the late 1960's. But still, it is fair to say that greater computing power was the main agent responsible for turning relatively sterile General Systems Theory into remarkably fertile complex systems science.
The systems theorists of the forties, fifties and sixties recognized, on an intuitive level, the riches to be found in the study of complex self-organizing systems. But, as they gradually realized, they lacked the tools with which to systematically compare their intuitions to real-world data. We now know quite specifically what it was they lacked: the ability to simulate complex processes numerically, and to represent the results of complex simulations pictorially. In a very concrete sense, today's "chaos theory" picks up where yesterday's General Systems Theory left off.
In the following pages, as I discuss various aspects of language, mind and reality, I will not often be directly concerned with computer simulations or technical mathematics. However, the underlying spirit of the book is inextricable from recent advances in mathematical chaos theory, and more generally in complex systems science. And these advances would not have been possible without 1) the philosophy of General Systems Theory, and 2) the frame of mind induced by modern computing power. Science, philosophy and technology are not easily separable.
Rather than letting historical reflection get the upper hand, I will end this section with a concrete example. The basic article of faith underlying complex systems science is that there are certain large-scale patterns common to the behavior of different self-organizing systems. And perhaps the simplest such pattern is the feedback structure -- the physical structure or dynamical process that not only maintains itself but is the agent for its own increase. Some specific examples of feedback structures are as follows:
1. Autocatalytic reactions in chemistry, such as the Belousov-Zhabotinsky reaction. Once these chemical reactions get started, they grow by feeding off themselves. Often the rate of growth fluctuates chaotically.
2. Increasing returns in economics. This refers to a situation in which the more something is sold, the easier it becomes to sell. Such situations are apt to be unpredictable -- an historical example is the competition between VHS and Beta format videotapes. (A toy simulation of this effect is sketched just after this list.)
3. Double binds in psychology. Gregory Bateson's groundbreaking theory of schizophrenia postulates feedback reactions between family members, according to which miscommunication leads to more miscommunication.
4. Chaos in immune systems. Mathematical models trace the dynamics of antibody types, as they stimulate one another to reproduce and then attack each other. In some cases this may result in concentrations of two antibody types escalating each other by positive feedback. In other cases it may result in low-level chaotic fluctuations.
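A toy simulation makes the flavor of example 2 concrete. The model and the numbers below are my own illustrative choices, not drawn from the economic literature; each new buyer simply chooses a format with probability proportional to how many people already own it, so that an early random lead feeds on itself. Which format "wins" differs unpredictably from run to run.

    # A toy "increasing returns" market in the spirit of the VHS-versus-Beta
    # example: every sale makes the next sale easier, so small early
    # fluctuations get locked in by positive feedback.
    import random

    def market_run(buyers=10000, seed=None):
        rng = random.Random(seed)
        owners = {"VHS": 1, "Beta": 1}         # start each format with a single owner
        for _ in range(buyers):
            total = owners["VHS"] + owners["Beta"]
            pick = "VHS" if rng.random() < owners["VHS"] / total else "Beta"
            owners[pick] += 1                  # feedback: ownership breeds ownership
        return owners

    for run in range(5):
        print(run, market_run(seed=run))       # different runs can lock in different winners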
Of course, feedback structures of a simple sort are present in simple systems as well as complex systems (every guitar player knows this). But the important observation is that feedback structures appear to be a crucial part of self-organization, regardless of the type of system involved. Parallels like this are what the complex-systems-science researcher is always looking for: they hint at general laws of behavior.
And indeed, the cognitive equation of Chapter Eight came about as an attempt to refine the notion of "complex feedback structure" into a precise, scientifically meaningful concept -- to rigorously distinguish between the intricate feedback structures present in economies and minds and the relatively simple feedback involved in a guitar solo.
In this book I will be concerned with four types of psychological systems: linguistic systems, belief systems, minds and realities. All of these systems, I suggest, are strange attractors of the dynamical system which I call the "cognitive equation." And they are, furthermore, related by the following system of "intuitive equations":
Linguistic system = syntactic system + semantic system
Belief system = linguistic system + self-generating system
Mind = dual network + belief systems
Reality = minds + shared belief system
The meanings of the terms in these four "equations" will be explained a little later. But the basic idea should be, if not "clear," at least not completely blurry. The only important caveat is as follows: the use of the "+" sign should not be taken as a statement that the two entities on the right side of each equation have significant independent functionality. For instance, syntactic systems and semantic systems may be analyzed separately in many respects, but neither can truly function without the other.
A slightly more detailed explanation of the terms in these "equations" is as follows:
1) A linguistic system consists of a deductive, transformational system called a syntactic system, and an interdefined collection of patterns called a semantic system, related according to a principle called continuous compositionality. This view explains the role of logic in reasoning, and the plausibility of the Sapir-Whorf hypothesis.
2) A self-generating system consists of a collection of stochastically computable processes which act on one another to create new processes of the same basic nature. The dynamics of mind may be understood in terms of the two processes of self-generation and pattern recognition; this idea yields the "cognitive equation." (A toy illustration of self-generation is sketched just after this list.)
3) A belief system is a linguistic system which is also a self-generating system. Belief systems may be thought of as the "immune system" of the mind; and, just like immune systems, they may function usefully or pathologically. They are a necessary complement to the fundamental dual network structure of mind (as outlined in The Evolving Mind).
4) Reality and the self may be viewed as two particularly powerful belief systems -- these are the "master belief systems," by analogy to which all other belief systems are formed.
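A deliberately oversimplified sketch may help fix the idea of a self-generating system; it is my own toy illustration, not the cognitive equation itself. Here a "process" is just an affine map x -> (ax + b) mod 7, and one process acts on another by composition, producing a new process of the same kind. Iterated at random, the pool of processes closes up on a set that produces only its own members -- a crude image of an attractor on the level of processes rather than numbers.

    # A toy self-generating system: a pool of processes that act on one
    # another to create new processes of the same kind.
    import random

    M = 7                                      # a small modulus keeps the space of processes finite

    def compose(f, g):
        """The process f acting on g: x -> f(g(x)), again an affine map mod M."""
        (a1, b1), (a2, b2) = f, g
        return ((a1 * a2) % M, (a1 * b2 + b1) % M)

    def run(steps=3000, seed=0):
        rng = random.Random(seed)
        pool = {(2, 1), (3, 5)}                # an arbitrary pair of seed processes
        for _ in range(steps):
            candidates = list(pool)
            f, g = rng.choice(candidates), rng.choice(candidates)
            pool.add(compose(f, g))            # processes create new processes
        return pool

    pool = run()
    closed = all(compose(f, g) in pool for f in pool for g in pool)
    print(len(pool), "distinct processes; closed under composition:", closed)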
Each of the "equations," as these explanations should make clear, represents a novel twist on a reasonably well-known idea. For instance, the idea of linguistics as semantics plus syntax is commonplace. But what is new here is 1) a pragmatic definition of "semantics," and 2) the concept of "continuous compositionality," by which syntactic and semantic systems are proposed to be connected.
Similarly, the idea that beliefs are linguistic is not a new one, nor is the idea that beliefs collectively act to create other beliefs. But the specific formulation of these ideas given here is quite novel, and leads to unprecedentedly clear conclusions regarding the validity of belief systems.
The idea that mind consists of a data structure populated by belief systems is fairly common in the AI/cognitive science community. But the relation between the belief system and the data structure has never been thoroughly examined from a system-theoretic point of view. Neither the role of feedback in belief maintenance, nor the analogy between immune systems and belief systems, has previously been adequately explored.
And finally, the view of reality as a collective construction has become more and more common over the past few decades, not only in the increasingly popular "New Age" literature but also in the intellectual community. However, up to this point it has been nothing more than a vague intuition. Never before has it been expressed in a logically rigorous way.
The cognitive equation underlies and guides all of these complex systemic dynamics. Elements of mind, language, belief and reality exist in a condition of constant chaotic fluctuation. The cognitive equation gives the overarching structure within which this creative chaos occurs; it gives the basic shape of the "strange attractor" that is the world.
More specifically, the assertion that each of these systems is an attractor for the cognitive equation has many interesting consequences. It implies that, as Whorf and Saussure claimed, languages are semantically closed, or very nearly so. It implies that belief systems are self-supporting -- although the nature of this self-support may vary depending on the rationality of the belief systems. It implies that perception, thought, action and emotion form an unbroken unity, each one contributing to the creation of the others. And it tells us that the relation between mind and reality is one of intersubjectivity: minds create a reality by sharing an appropriate type of belief system, and then they live in the reality which they create.
All this is obviously only a beginning: despite numerous examples, it is fairly abstract and general, and many details remain to be filled in. However, my goal in this book is not to provide a canon of unassailable facts, but rather to suggest a new framework for studying the remarkable phenomena of language, reason and belief. Three hundred years ago, Leibniz speculated about the possibility of giving an equation of mind. It seems to me that, with complex systems science, we have finally reached the point where we can take Leibniz seriously -- and transform his dream into a productive research programme.
In this section I will give an extremely compressed summary of the main ideas developed in the following chapters. These ideas may be somewhat opaque without the explanations and examples given in the text; however, the reader deserves at least a vague idea of the structure of the arguments to come. For a more concrete idea of where all this is leading, the reader is invited to skip ahead to Chapter Eleven, where all the ideas of the previous chapters are integrated and applied to issues of practical human and machine psychology.
Chapters Two and Three: a review of the concepts of pattern, algorithmic information, associative memory and multilevel control. These ideas, discussed more thoroughly in SI and EM, provide a rigorous basis for the analysis of psychological phenomena on an abstract structural level. A special emphasis is placed here on the issue of parallel versus serial processing. The mind/brain, it is argued, is essentially a parallel processor ... but some processes, such as deductive logic, linguistic thought, and simulation of chaotic systems, involve virtual serial processing -- networks of processes that simulate serial computation by parallel operations.
Chapter Four: the first part of a multi-chapter analysis of the relationship between language and thought. Using the concept of a structured transformation system, I consider a very special kind of linguistic system, Boolean logic, with a focus on the well-known "paradoxes" which arise when Boolean logic is applied to everyday reasoning. I argue that these "paradoxes" disappear when Boolean reasoning is considered in the context of associative memory and multilevel control. This implies that there is nothing problematic about the mind using Boolean logic in appropriate circumstances -- a point which might seem to be obvious, if not for the fact that it has never been demonstrated before. The standard approach in formal logic is simply to ignore the paradoxes!
Chapter Five: this analysis of Boolean logic is extended to more general linguistic systems. It is argued that, as a matter of principle, a linguistic system cannot be understood except in the context of a particular mind. In this spirit, I give a new analysis of meaning, very different from the standard Tarski/Montague possible worlds approach. According to the new approach, the meaning of a phrase is the set of all patterns associated with it. This implies that meaning is fundamentally systematic, because many of the patterns associated with a given phrase have to do with other phrases. In this view, it is not very insightful to think about the meaning of a linguistic entity in isolation. The concept of meaning is only truly meaningful in the context of a whole linguistic system -- which is in turn only meaningful in the context of some particular mind.
Chapter Six: the connections between language, logic, reality, thought and consciousness are explored in detail. First, the pattern-theoretic analysis of language is applied to one of the more controversial ideas in twentieth-century thought: the Sapir-Whorf hypothesis, which states that patterns of thought are controlled by patterns of language. Then I discuss the role of consciousness in integrating language with other thought processes. A new theory of consciousness is proposed, which clarifies both the biological bases of awareness and the fundamental relation between mind and the external world.
Chapter Seven: a brief excursion into the most impressive modern incarnation of General Systems Theory, George Kampis's theory of component-systems, which states that complex self-organizing systems construct themselves in a very basic sense. After reviewing and critiquing Kampis's ideas, I introduce the novel concept of a self-generating system.
Chapter Eight: formulates a "dynamical law for the mind," the cognitive equation. This is a dynamical iteration on the level of processes and structures rather than numerical variables. It is argued that complex systems such as minds and languages are attractors for this equation: they supply the structure overlying the chaos of mental dynamics. Learning, and in particular language acquisition, are explained in terms of the iteration of the cognitive equation.
Chapter Nine: having discussed linguistic systems and self-generating systems, I introduce a concept which synthesizes them both. This is the belief system. I argue that belief is, in its very essence, systematic -- that, just as it makes little sense to talk about the meaning of an isolated word or phrase, it makes little sense to talk about a single belief, in and of itself. Using examples from psychology and the history of science, I develop the idea that a belief system is a structured transformation system, fairly similar in construction to a language.
And in this context, I consider also the question of the quality of a belief system. If one takes the system-theoretic point of view, then it makes little sense to talk about the "correctness" or "incorrectness" of a single belief. However, it is possible to talk about a productive or unproductive belief system. Complex systems thinking does not prohibit normative judgements of beliefs; it just displaces these judgements from the individual-belief level to the level of belief systems.
Chapter Ten: continuing the analysis of belief, I put forth the argument that belief systems are functionally and structurally analogous to immune systems. Just as immune systems protect bodies from infections, belief systems protect expensive, high-level psychological procedures from input. A belief permits the mind to deal with something "automatically," thus protecting sophisticated, deliberative mental processes from having to deal with it. In this context, I discuss the Whorfian/Nietzschean hypothesis that self and external reality itself must be considered as belief systems.
Next, I propose that beliefs within belief systems can survive for two different reasons:
a) because they are useful for linguistic systems such as logic, or
b) because they are involved in a group of beliefs that mutually support each other regardless of external utility: i.e., because they are in themselves attractors of the cognitive equation.
Good reasoning, I argue, is done by logical systems coupled with belief systems that support themselves mainly by process (a). On the other hand, faulty reasoning is done by logical systems coupled with belief systems that support themselves mainly by process (b).
Chapter Eleven: the relation between mind and reality is discussed from several different perspectives. First it is argued that self and reality are belief systems. Then hyperset theory and situation semantics are used to give a mathematical model of the universe in which mind and reality reciprocally contain one another. Finally, I present a series of philosophically suggestive speculations regarding the relation between psychology and quantum physics.
Chapter Twelve: the phenomenon of dissociation is used to integrate the ideas of the previous chapters into a cohesive model of mental dynamics. It is argued that minds naturally tend to separate into partially disconnected subnetworks, with significantly independent functionality. This sort of dissociation has traditionally been associated with mental disorders such as multiple personality disorder and post-traumatic stress syndrome. However, I argue that it is in fact necessary for normal, effective logical thought. For the competition of dissociated personality networks provides a natural incentive for the creation of self-sustaining belief systems -- which are the only type of belief systems capable of supporting creative deduction.
As well as supplying a new understanding of human personality, this idea also gives rise to a design for a new type of computer program: the A-IS, or "artificial intersubjectivity," consisting of a community of artificial intelligences collectively living in and creating their own "virtual" world. It is suggested that A-IS represents the next level of computational self-organization, after artificial intelligence and artificial life.