From Complexity to Creativity -- Copyright Plenum Press, © 1997


Part I. The Complex Mind/Brain

CHAPTER 2. THE PSYNET MODEL



2.1 INTRODUCTION

    There is no question that the mind/brain is extremely complex, and therefore falls within the purview of complexity science. The real question is whether complexity science, in its current state, is up to the challenge of modelling a system as complex as the mind/brain.

    I believe that the answer to this question is a guarded yes. In a series of publications over the past half-decade -- most notably The Structure of Intelligence (Goertzel, 1993; SI), The Evolving Mind (Goertzel, 1993a; EM) and Chaotic Logic (Goertzel, 1994; CL) -- I have constructed a novel complex systems model of mind, which I now call the psynet model. In this chapter I will review some of the most significant aspects of the psynet model, and discuss the place of the model within complexity science.

    The psynet model is a simple construction, but it is fundamentally different from previous complex system models. Before embarking on a detailed description, it may therefore be useful to enumerate the key principles of the model in a concise form, bearing in mind that many of the terms involved have not been explained yet:

    1.    Minds are magician systems residing on graphs.

    2.    The magicians involved are pattern/process magicians.

    3.    Thoughts, feelings and other mental entities are "structural conspiracies," i.e. autopoietic systems within the mind magician system.

    4.    The structural conspiracies of the mind join together in a complex network of attractors, meta-attractors, etc.

    5.    This network of attractors approximates a fractal structure called the dual network, which is structured according to at least two principles: associativity and hierarchy.

Later chapters will apply the model to particular cases, and will explore the relations between the psynet model and other complex systems models. For now, however, the emphasis will be on giving a clear and general statement of the model itself, by explicating the psychological meaning of these abstract principles.

2.3 THE DUAL NETWORK

    Recall that a magician system consists, quite simply, of a collection of entities called "magicians" which, by acting on one another, have the power to cooperatively create new magicians. Certain magicians are paired with "antimagicians," magicians which have the power to annihilate them.

    According to the psynet model, mind is a pattern/process magician system. It is a magician system whose magicians are concerned mainly with recognizing and forming patterns.

    Such a system, as I have described it, may at first sound like an absolute, formless chaos. Just a bunch of magicians acting on each other, recognizing patterns in each other -- where's the structure? Where's the sense in it all?

    But, of course, this glib analysis ignores something essential -- the phenomenon of mutual intercreation, or autopoiesis. Systems of magicians can interproduce. For instance, a can produce b, while b produces a. Or a and b can combine to produce c, while b and c combine to produce a, and a and c combine to produce b. The number of possible systems of this sort is truly incomprehensible. But the point is that, if a system of magicians is mutually interproducing in this way, then it is likely to survive the continual flux of magician interaction dynamics. Even though each magician will quickly perish, it will just as quickly be re-created by its co-conspirators. Autopoiesis creates self-perpetuating order amidst flux.
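
    The flavor of this dynamic is easy to capture in a few lines of code. The following sketch implements the interproducing triple just described; the names and rules are illustrative stand-ins, not a formal rendering of magician dynamics. At each step every pair of magicians acts, and all the "parents" perish -- yet the triple persists.

```python
# A toy magician system: the interproducing triple described above
# (a and b produce c, b and c produce a, a and c produce b).
RULES = {frozenset("ab"): "c", frozenset("bc"): "a", frozenset("ac"): "b"}

def step(population):
    """One round of magician dynamics: every pair acts, then all parents perish."""
    offspring = []
    for i in range(len(population)):
        for j in range(i + 1, len(population)):
            child = RULES.get(frozenset((population[i], population[j])))
            if child is not None:
                offspring.append(child)
    return offspring   # the co-conspirators have re-created one another

population = ["a", "b", "c"]
for t in range(5):
    population = step(population)
    print(t, sorted(population))   # the triple persists amidst the flux
```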

    Some interproducing systems of magicians might be unstable; they might fall apart as soon as some external magicians start to interfere with them. But others will be robust; they will survive in spite of external perturbations. These robust magician systems are what I call autopoietic systems, a term whose formal definition is given in the Appendix. This leads up to the next crucial idea of the psynet model: that thoughts, feelings and beliefs are autopoietic. They are stable systems of interproducing pattern/processes. In CL, autopoietic pattern/process magician systems are called structural conspiracies, a term which reflects the mutual, conspiratorial nature of autopoiesis, and also the basis of psychological autopoiesis in pattern (i.e. structure) recognition. A structural conspiracy is an autopoietic magician system whose component processes are pattern/processes.

    But structural conspiracy is not the end of the story. The really remarkable thing is that, in psychological systems, there seems to be a global order to these autopoietic subsystems. The central claim of the psynet model is that, in order to form a functional mind, these structures must spontaneously self-organize into larger autopoietic superstructures. And perhaps the most important such superstructure is a sort of "monster attractor" called the dual network.

    The dual network, as its name suggests, is a network of pattern/processes that is simultaneously structured in two ways. The first kind of structure is hierarchical. Simple structures build up to form more complex structures, which build up to form yet more complex structures, and so forth; and the more complex structures explicitly or implicitly govern the formation of their component structures. The second kind of structure is heterarchical: different structures connect to those other structures which are related to them by a sufficient number of pattern/processes. Psychologically speaking, as will be elaborated in the following section, the hierarchical network may be identified with command-structured perception/control, and the heterarchical network may be identified with associatively structured memory.
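
    To make the two kinds of structure concrete, here is a minimal sketch of a dual-network node, under the assumption that each process simply keeps two kinds of links: hierarchical (parent and children, for perception and control) and heterarchical (associative neighbors, for memory). The names are illustrative, not drawn from the formal definition in the Appendix.

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    name: str
    parent: "Process | None" = None                   # hierarchical: who governs me
    children: list = field(default_factory=list)      # hierarchical: whom I govern
    associates: list = field(default_factory=list)    # heterarchical: related processes

def link_hier(parent, child):
    parent.children.append(child)
    child.parent = parent

def link_heter(a, b):
    a.associates.append(b)
    b.associates.append(a)

# Two "line" processes build up to a "shape" process (hierarchy), and are
# also associatively linked to each other (heterarchy): the same nodes
# participate in both networks at once.
shape = Process("shape-detector")
line1, line2 = Process("line-detector-1"), Process("line-detector-2")
link_hier(shape, line1)
link_hier(shape, line2)
link_heter(line1, line2)
print(shape.children[0].name, line1.associates[0].name)
```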

    While the dual network is, intuitively speaking, a fairly simple thing, to give a rigorous definition requires some complex constructions and arbitrary decisions. One approach among many is described in an Appendix to this chapter.

    A psynet, then, is a magician system which has evolved into a dual network structure. Or, to place the emphasis on structure rather than dynamics, it is a dual network whose component processes are magicians. The central idea of the psynet model is that the psynet is necessary and sufficient for mind. And this idea rests on the crucial assumption that the dual network is autopoietic for pattern/process magician dynamics.

Psychological Interpretation

    At first glance the dual network may seem an extremely abstract structure, unrelated to concrete psychological facts. But a bit of reflection reveals that the hierarchical and heterarchical networks are ubiquitous in theoretical psychology and cognitive neuroscience.

    For instance, the whole vast theory of visual perception is a study in hierarchy: in how line processing structures build up to yield shape processing structures which build up to yield scene processing structures, and so forth. The same is true of the study of motor control: a general idea of throwing a ball translates into specific plans of motion for different body parts, which translates into detailed commands for individual muscles. It seems quite clear that there is a perceptual/motor hierarchy in action in the human brain. And those researchers concerned with artificial intelligence and robotics have not found any other way to structure their perceiving and moving systems: they also use, by and large, perceptual-motor hierarchies. Perhaps the best example of this is the idea of subsumption architecture in robotics, pioneered by Rodney Brooks at MIT (Brooks, 1989). In this approach, one begins by constructing low-level modules that can carry out simple perceptual and motor tasks, and only then constructs modules residing on the next level up in the hierarchy, which loosely regulate the actions of the low-level modules. The perceptual-motor hierarchy is created from the bottom up. Recent researchers (Churchland et al, 1994) have pointed out the importance of top-down as well as bottom-up information transmission within the visual system, and the existence of connections at all levels to regions of the non-visual brain. But these observations do not detract from the fundamental hierarchical structure of perception and action; rather, they elaborate its place in the ecology of mind.

    On the other hand, the heterarchical structure is seen most vividly in the study of memory. The associativity of human long-term memory is well-demonstrated (Kohonen, 1988), and has been simulated by many different mathematical models. The various associative links between items stored in memory form a kind of sprawling network. The kinds of associations involved are extremely various, but what can be said in general is that, if two things are associated in the memory, then there is some other mental process which sees a pattern connecting them. This is the principle of the heterarchical, associative network.

    The key idea of the dual network is that the network of memory associations (heterarchical network) is also used for perception and control (hierarchical network). As a first approximation, one may say that perception involves primarily the passing of information up the hierarchy, action involves primarily the passing of information down the hierarchy, and memory access involves primarily exploiting the associative links, i.e. the heterarchical network. But this is only a first approximation, and in reality every process involves every aspect of the network.

    In order that an associative, heterarchical network can be so closely aligned with an hierarchical network, it is necessary that the associative network be structured into different levels of clusters -- clusters of processes, clusters of clusters of processes, and so on. This is what I have, in EM, called the "fractal structure of mind." If one knew the statistics of the tree defining the hierarchical network, the fractal dimension of this cluster hierarchy could be accurately estimated (Barnsley, 1988). Alexander (1995) has argued for the neurobiological relevance of this type of fractal structure, and has constructed a number of interesting neural network simulations using this type of network geometry.

    Finally, it must be emphasized that neither the hierarchical network nor the heterarchical network is a static entity; both are constantly evolving within themselves, and the two are constantly coevolving together. One of the key points of the dual network model is that the structural alignment of these two networks implies the necessity for a dynamical alignment as well. In other words, whatever the heterarchical network does to keep itself well-adjusted must fit in nicely with what the hierarchical network does to keep itself adjusted (and obviously vice versa); otherwise the two networks would be constantly at odds. It stands to reason that the two networks might be occasionally at odds, but without at least a basic foundation for harmonious interaction between the two, a working dual network would never be able to evolve.

2.4 EVOLUTION AND AUTOPOIESIS IN THE DUAL NETWORK

    The dynamics of the dual network may be understood as a balance of two forces. There is the evolutionary force, which creates new forms, and moves things into new locations. And there is the autopoietic force, which retains things in their present form. If either one of the two forces is allowed to become overly dominant, the dual network will break down, and become excessively unstable, or excessively static and unresponsive.

    Of course, each of these two "forces" is just a different way of looking at the basic magician system dynamic. Autopoiesis is implicit in all attractors of magician dynamics, and evolutionary dynamics is a special case of magician dynamics, which involves long transients before convergence, and the possibility of complex strange attractors.

Memory Reorganization as Evolution

    Many theorists have expressed the opinion that, in some sense, ideas in the mind evolve by natural selection. Perhaps the most eloquent exposition of this idea was given by Gregory Bateson in his Mind and Nature (1980). The psynet model provides, for the first time, a rigorous analysis of the evolution/thought connection.

    The largest obstacle that must be overcome in order to apply evolution theory to the mind is the problem of the definition of fitness. Natural selection is, in Herbert Spencer's well-worn phrase, "survival of the fittest." When considering specific cases, biologists gauge fitnesses with their own common sense. If animal A runs faster than its predator, but animal B does not, then all else equal animal A is fitter -- no one needs a formal definition to tell them that. The problem is getting a handle on fitness in general. As the saying goes, if one cannot define fitness in any way besides reproductive success, then what one has is just survival of the survivors. And, more to the point, if one cannot define fitness in any way besides case-by-case special pleading, then what one has is a very inelegant theory that cannot be easily generalized to other contexts.

    One way around this problem, I have proposed, is to measure fitness in terms of emergent pattern. In EM, I define the structural fitness of an organism as the size of the set of patterns which synergetically emerge when the organism and its environment are considered jointly. If there are patterns arising through the combination of the organism with its environment, which are not patterns in the organism or the environment individually, then the structural fitness is large. Perhaps the easiest illustration is camouflage -- there the appearance of the organism resembles the appearance of the environment, generating the simplest possible kind of emergent pattern: repetition. But symbiosis is an even more convincing example. The functions of two symbiotic organisms match each other so effectively that it is easy to predict the nature of either one from the nature of the other.
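
    One crude way to operationalize this definition -- an assumption of the following sketch, not part of the formal treatment in EM -- is to use compressibility as a stand-in for algorithmic pattern. Emergent pattern is then approximated by how much better organism and environment compress jointly than separately; camouflage indeed scores higher than an unpatterned organism.

```python
import zlib

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def structural_fitness(organism: bytes, environment: bytes) -> int:
    # Emergent pattern ~ (cost of the parts compressed separately) minus
    # (cost of the pair compressed jointly): larger = more emergence.
    separate = compressed_size(organism) + compressed_size(environment)
    return separate - compressed_size(organism + environment)

# Camouflage: the organism repeats the environment's "texture," so the
# pair compresses far better together than an unpatterned organism does.
environment = b"sand pebble sand pebble " * 20
camouflaged = b"sand pebble sand pebble " * 10
unpatterned = bytes(range(256))
print(structural_fitness(camouflaged, environment) >
      structural_fitness(unpatterned, environment))   # True
```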

    The claim is not that structural fitness is all there is to biological fitness; it is merely that structural fitness is an important component of biological fitness. Suppose one says that a system "evolves by natural selection" if, among the individuals who make it up, reproductive success is positively correlated with fitness. Then, if one accepts the claim that structural fitness is an important component of fitness, one way to show that a system evolves by natural selection is to show that reproductive success is positively correlated with structural fitness.

    Using this approach, it is easy to see that ecosystems and immune systems both evolve by natural selection (see EM). And, according to the principles outlined above, it is clear that psynets do as well. Consider: the "environment" of a process in the psynet is simply its neighbors in the network. So the structural fitness of a process in the psynet is the amount of pattern that emerges between itself and its neighbors. But, at any given time, the probability of a process not being moved in the network is positively correlated with its degree of "fit" in the associative memory. This degree of fit is precisely the structural fitness! So, survival in current position is correlated with structural fitness with respect to immediate environment; and thus, according to the definitions given, the psynet evolves by natural selection.
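
    A toy rendering of this argument may make it more vivid. In the sketch below -- whose details (a ring of bit-string processes, shared bits as the emergent-pattern proxy) are assumptions of mine -- poorly fitting processes are relocated while well-fitting ones survive in place, and the total fit of the network rises accordingly.

```python
import random

def fit(memory, i):
    """Shared bits with the two ring neighbours -- a crude stand-in for the
    emergent pattern between a process and its immediate environment."""
    n = len(memory)
    left, right = memory[(i - 1) % n], memory[(i + 1) % n]
    return sum(a == b for a, b in zip(memory[i], left)) + \
           sum(a == b for a, b in zip(memory[i], right))

def total_fit(memory):
    return sum(fit(memory, i) for i in range(len(memory)))

def relocate_worst(memory):
    """Move the most poorly fitting process to a random new position."""
    i = min(range(len(memory)), key=lambda k: fit(memory, k))
    j = random.randrange(len(memory))
    memory[i], memory[j] = memory[j], memory[i]

random.seed(0)
memory = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
print(total_fit(memory))              # fit of the initial random arrangement
for _ in range(500):
    candidate = [row[:] for row in memory]
    relocate_worst(candidate)
    if total_fit(candidate) >= total_fit(memory):    # survival of the fitter
        memory = candidate                           # arrangement in place
print(total_fit(memory))              # higher: the network has reorganized
```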

    According to this argument, the "individuals" which are surviving differentially based on fitness are, at the lowest level, individual magicians, individual mental processes. By the same logic, clusters of magicians may also be understood to evolve by natural selection. This observation leads up to a sense in which the psynet's evolutionary logic is different from that which one sees in ecosystems or immune systems. Namely, in the psynet, every time a process or cluster is moved in accordance with natural selection, certain processes on higher levels are being crossed over and/or mutated.

    In ecosystem evolution, the existence of "group selection" -- evolution of populations, species or higher taxa -- is a matter of contention. In psynet evolution, because of the presence of the hierarchical network, there is no cause for controversy. Higher-order individuals can explicitly represent and control groups, so that the distinction between groups and individuals is broken down. Group selection is a form of individual selection. In this sense, it would seem that the psynet uses natural selection much more efficiently than other evolving systems, such as ecosystems or immune systems. While ecosystems can, at best, carry out higher-order evolution on a very slow scale, psynets can carry out low-order and higher-order evolution almost simultaneously. This striking conclusion cries out for mathematical and computational investigation.

Evolution and Creativity

    We have been discussing evolution as a means for maintaining the structure of the associative memory network. However, evolution also has a different psychological function. Namely, it comes up in regard to the creativity of mental process networks. This is where the computational experiments to be described in Chapter Six are intuitively relevant. They show the ability of the genetic algorithm (a computational instantiation of evolution) to produce interesting new forms.

    The genetic algorithm consists of a population of entities, which repeatedly mutate and cross over with each other to produce new entities. Those entities which are "fitter" are selected for reproduction; thus the population as a whole tends to assume forms determined by the fitness criterion being used. Typical genetic algorithm experiments are aimed at finding the one correct answer to some mathematical problem. In most of the experiments in Chapter Six, however, the goal is to use the creative potential of whole populations, rather than merely using a population as a means to get to some "optimal" guess. This is precisely what, in the psynet model, is done by the mind's intrinsic "evolution." The complex forms created by evolving mental processes are vastly more complex than the simple pictures and melodies evolved in my experiments; on an abstract level, however, the principle is the same.
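
    For readers unfamiliar with the technique, the following minimal sketch shows the whole genetic-algorithm loop -- fitness-based selection, crossover, mutation -- applied to an arbitrary toy fitness function (the number of 1-bits in a string). It is a schematic of the method, not a reconstruction of the experiments of Chapter Six.

```python
import random

def mutate(bits, rate=0.02):
    """Flip each bit independently with small probability."""
    return [b ^ 1 if random.random() < rate else b for b in bits]

def crossover(a, b):
    """Single-point crossover of two parent strings."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=50, length=32, generations=40):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)      # fitness = number of 1-bits
        parents = pop[: pop_size // 2]       # the "fitter" half reproduces
        pop = [mutate(crossover(random.choice(parents), random.choice(parents)))
               for _ in range(pop_size)]
    return max(sum(s) for s in pop)

random.seed(0)
print(evolve())    # climbs toward the maximum of 32
```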

    The genetic algorithm, in a psychological context, must be understood as an approximation of the activity of subnetworks of the dual network. Subnetworks are constantly mutating as their component processes change. And they are constantly "crossing over," as individual component interactions change in such a way as to cause sub-subnetworks to shift their allegiance from one subnetwork to another. This dynamic has been discussed in detail in The Evolving Mind.

    And what is the relation between this genetic-algorithm-type creativity, in the hierarchical network, and the evolutionary reorganization of the heterarchical network, the associative memory? The answer is very simple: they are the same! When memory items move around from one place to another, seeking a "fitter" home, they are automatically reorganizing the hierarchical network -- causing subnetworks (mental "programs") to cross over and mutate. On the other hand, when processes switch their allegiance from one subnetwork to another, in a crossover-type process, their changing pattern of interaction constitutes a changing environment, which changes their fitness within the heterarchical network. Because the two networks are one, the two kinds of evolution are one. GA-style evolution and ecology are bound together very tightly, much more tightly than in the case of the evolution of species.

Autopoiesis and Thought

    But evolution is not the only kind of dynamics in the dual network. In order to achieve the full psynet model, one must envision the dual network, not simply as an hierarchy/heterarchy of mental processes, but also as an hierarchy/heterarchy of evolving autopoietic process systems, where each such system is considered to consist of a "cluster" of associatively related ideas/processes. Each system may relate to each other system in one of three different ways: it may contain that other system, it may be contained in that other system, or it may coexist side-by-side with that other system. The dual network itself is the "grand-dad" of all these autopoietic systems.

    Autopoiesis is then seen to play an essential role in the dynamics of the dual network, in that it permits thoughts (beliefs, memories, feelings, etc.) to persist even when the original stimulus which elicited them is gone. Thus a collection of thoughts may survive in the dual network for two reasons:

    -- a usefulness relative to the hierarchical control structure, i.e. a usefulness for the current goals of the organism;

     -- autopoiesis

    As is shown in Chaotic Logic, this line of reasoning may be used to arrive at many specific conclusions regarding systems of thought, particularly belief systems. For purposes of illustration, two such conclusions may be worth mention here:

     -- that many belief systems considered "poor" or "irrational" have the property that they are sustained primarily by the latter method. On the other hand, many very useful and sensible belief systems are forced to sustain themselves by autopoiesis for certain periods of time as well. System theory clarifies but does not solve the problem of distinguishing "good" from "poor" belief systems.

     -- that one of the key roles of autopoietic systems in the dual network is to serve as a "psychological immune system," protecting the upper levels of the dual network from the numerous queries sent up from the lower levels.

    Stability means that a system is able to "absorb" the pressure put on it by lower levels, instead of constantly passing things along to the levels above it. Strong parallels exist between the dynamics of antibody classes in the immune system and the dynamics of beliefs in an autopoietic system, but we will not explore these parallels here.

    So, in the end, after thinking about the dual network as an emergent structure, one inevitably returns to the dynamic point of view. One sees that the whole dual network is just another autopoietic system which survives by the same two methods: structural conspiracy and external utility. Even if one begins with a fairly standard information-processing picture of mind, such as the master network, one eventually winds up with an "anything-goes" autopoietic-systems viewpoint, in which a successful mind, like a successful thought system, is one which perpetually and usefully creates itself.

2.5 LANGUAGE AND LOGIC

    Next, shifting gears somewhat, let us turn from evolution to language. Language is the focal point of much modern philosophy and cognitive science. It is commonly cited as a distinguishing feature of intelligent systems. What does the psynet model have to say about the linguistic mind?

    A language, as conceived in modern linguistics, is a transformation system. It is a collection of transformations, each equipped with its own set of rules regarding which sorts of entities it can be applied to in which situations. By applying these rules, one after the other after the other, to elements of the "deep structure" of thought, sentences are produced. In terms of the dual network, each of these transformation rules is a process with a certain position in the network; and sentences are the low-level result of a chain of information transmission beginning with a high-level structure or "idea."

    In the case of the language of mathematics, the transformation rules are very well understood; this is the achievement of the past 150 years of formal logic. In the case of natural languages, our understanding is not yet complete; but we do know a handful of general transformation rules (e.g. Chomsky's famous "move-alpha"), as well as dozens upon dozens of special-case rules.

    But this formal syntactic point of view is not enough. A set of transformation rules generates an incredible number of possible sentences, and in any given situation, only a minuscule fraction of these are appropriate. A system of transformation rules is only useful if it is amenable to reasoning by analogy -- if, given a reasonable set of constraints, the mind can -- by implementing analogical reasoning -- use the system to generate something satisfying those constraints. In other words, roughly speaking, a transformation system is only useful if structurally similar sentences have similar derivations. This "principle of continuous compositionality" is a generalization of Frege's (1893) famous principle of compositionality. It appears to hold true for natural languages, as well as for those branches of mathematics which we have studied to date.

    This has immediate implications for the theory of formal semantics. When one hears the phrase "the mathematics of meaning," one automatically thinks of the formal-semantic possible-worlds approach (Montague, 1974). But though it was brilliant in its time, it may well be that this approach has outlived its usefulness. The theory of computation suggests a more human and intuitive approach to meaning: the meaning of an entity is the fuzzy set of patterns that are "related" to it by other patterns. If one accepts this view of meaning, then the connection between syntax and semantics becomes very simple. A useful transformation system is one in which structurally similar sentences have similar derivations, and two sentences which are structurally similar will have similar meanings. So a useful transformation system is one in which sentences with similar meanings have similar derivations.

    It is not hard to see that continuous compositionality is exactly what is required to make a language naturally representable and usable by the dual network. The key point is that, by definition, statements with similar meanings are related by common patterns, and should thus be stored near one another in the memory network. So if a transformation system is "useful" in the sense of displaying continuous compositionality, it follows that statements stored near each other in the memory network will tend to have similar derivations. This means that the same "derivation process," using the same collection of strategies, can be used for deriving a whole group of nearby processes within the network. In other words, it means that useful transformation systems are tailor-made for the superposition of an associative memory network with an hierarchical control network containing "proof processes." So, continuous compositionality makes languages naturally representable and learnable in the dual network. It is what distinguishes natural languages from arbitrary formal languages.

    As briefly argued in Chaotic Logic, this analysis has deep implications for the study of language learning. Language acquisition researchers are conveniently divided into two camps: those who believe that inborn knowledge is essential to the process of language learning, and those who believe that children learn language "on their own," without significant hereditary input. The psynet model does not resolve this dispute, but it does give a relevant new perspective on the processes of language learning.

    In the language acquisition literature there is much talk of the "bootstrapping problem," which essentially consists of the fact that the different aspects of language are all interconnected, so that one cannot learn any particular part until one has learned all of the other parts. For instance, one cannot learn the rules of sentence structure until one has learned the parts of speech; but how does one learn the parts of speech, except by studying the positions of words in sentences? From the psynet perspective the bootstrapping problem is no problem whatsoever; it is simply a recognition of the autopoietic nature of linguistic systems. Language acquisition is yet another example of convergence to a structural conspiracy, an autopoietic system, a strange attractor of pattern/process magician dynamics.

    The question of innateness is thus reformulated as a question of the size of the basin of the language attractor. If the basin is large enough then no innate information is needed. If the basin is too small then innate information may be needed in order to be certain that the child's learning systems start off in the right place. We do not presently know enough about language learning to estimate the basin size and shape; this exercise has not even been carried out for formal languages, let alone natural languages.
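
    Though the exercise has not been carried out, the generic recipe for gauging basin size is simple enough: sample random initial conditions, run the dynamics, and count how often the system falls into the target attractor. The sketch below does this for a stand-in one-dimensional map with a known basin; the map is, of course, merely a placeholder for the vastly more complex dynamics of language learning.

```python
import random

def dynamics(x):
    return x * x     # stand-in map: attractors at 0 (for |x| < 1) and at infinity

def reaches_attractor(x, steps=50):
    """Does this initial condition fall into the basin of the fixed point 0?"""
    for _ in range(steps):
        x = dynamics(x)
        if abs(x) > 1e6:
            return False   # escaped toward the other attractor
    return True

random.seed(1)
samples = [random.uniform(-2.0, 2.0) for _ in range(10000)]
basin_fraction = sum(map(reaches_attractor, samples)) / len(samples)
print(round(basin_fraction, 2))   # ~0.5: the basin covers half the sampled range
```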

Overcoming the "Paradoxes" of Logic

    It is interesting to apply this analysis of language to the very simple language known as Boolean logic. When Leibniz conceived what is now called "Boolean logic" -- the logic of and, or and not -- he intended it to be a sort of language of thought. Mill, Russell, and many recent thinkers in the field of artificial intelligence have pursued the same intuition that much thought is just the solution of Boolean equations. But many problems stand in the way of this initially attractive idea.

    For example, there are the "paradoxes of implication." According to Boolean logic, "A implies B" just means "either B is false, or A is true." But this has two unsavory consequences: a false statement implies everything, and a true statement is implied by everything. This does not accord very well with our intuitive idea of implication.

    And there is Hempel's paradox of confirmation. According to Boolean logic, "All ravens are black" is equivalent to "All non-black entities are non-ravens." But then every piece of evidence in favor of the statement "All non-black entities are non-ravens" is also a piece of evidence in favor of the statement "All ravens are black." But this means that when we observe a white goose, we are obtaining a piece of evidence in support of the idea that all ravens are black -- which is ridiculous!

    All of these paradoxes are easily avoided if, rather than just hypothesizing that the mind uses Boolean logic, one hypothesizes that the mind uses Boolean logic within the context of the dual network. As an example, let us consider one of the paradoxes of implication: how is it that a false statement implies everything? Suppose one is convinced of the truth both of A and of the negation of A, call it not-A. How can one prove an arbitrary statement B? It's simple. The truth of A implies that either A is true, or B is true. One thus holds both not-A and "either A or B." But if not-A is true, and either A or B is true, then certainly B must be true.

    To put it less symbolically, suppose I love mom and I hate mom. Then surely either I love mom or cats can fly -- after all, I love mom. But I hate mom, so if either I love mom or cats can fly, then obviously cats can fly.

    So what, exactly, is the problem here? This paradox dates back to the Scholastic philosophers, and it hasn't obstructed the development of mathematical logic in the slightest degree. But from the point of view of psychology, the situation is absurd and unacceptable. Of course a person can both love and hate their mother without reasoning that cats can fly.

    The trick to avoiding the paradox is to recognize that the psynet is primary, and that logic is only a tool. The key step in the deduction of B from "A and not-A" is the formation of the phrase "A or B." The dual network, using the linguistic system of Boolean logic in the manner outlined above, simply will not tend to form "A or B" unless A and B are related by some pattern. No one ever thinks "either I love mom or cats can fly," any more than they think "either I love mom or planes can fly." So the dual network, using Boolean logic in its natural way, will have a strong tendency not to follow chains of reasoning like those required to reason from a contradiction to an arbitrary statement.
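
    The point can be made concrete with a toy reasoner (my construction, not a formalism from CL). Classical explosion needs the step from A to "A or B" for an arbitrary B; if disjunction introduction is gated by a crude pattern test -- here, simply whether the two statements share a word -- the derivation from the contradiction to "cats can fly" is never assembled.

```python
def related(a: str, b: str) -> bool:
    """Crude pattern test: do the two statements share any word?"""
    return bool(set(a.lower().split()) & set(b.lower().split()))

def explode(a: str, goal: str):
    """Attempt the classical derivation of `goal` from the contradiction
    `a` and `not a`, refusing the disjunction-introduction step whenever
    no pattern links the two statements."""
    if not related(a, goal):
        return None                        # "a or goal" is simply never formed
    return [a,                             # premise
            f"not ({a})",                  # premise
            f"({a}) or ({goal})",          # disjunction introduction (gated)
            goal]                          # disjunctive syllogism

print(explode("I love mom", "cats can fly"))        # None: the step is blocked
print(explode("I love mom", "I admire mom too"))    # derivation goes through
```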

    But what if some process within the dual network, on an off chance, does reason that way? Then what? Will this contradiction-sensitivity poison the entire dual network, paralyze its reasoning functions? No. For a process that judges every statement valid will be very poor at recognizing patterns. It will have no clue what patterns to look for. Therefore, according to the natural dynamics of the multilevel network, it will rapidly be eliminated. This is natural selection at work!

    This is a very partial view of the position of logic in the dual network -- to complete the picture we would have to consider the other paradoxes mentioned above, as well as certain other matters. But the basic idea should be clear. The paradoxes of Boolean logic are fatal only to Boolean logic as an isolated reasoning tool, not to Boolean logic as a device implemented in the context of the psynet. In proper context, the species of linguistic system called logic is of immense psychological value.

2.6 PSYNET AI

    The psynet model was originally conceived as a kind of "abstract AI" -- an AI theory based, not on the speed and memory limitations of current computers, nor on the mathematical tools of formal logic, but on the introspectively and experimentally observed structures of mind itself. Subsequent development of the model has taken a more philosophical and psychological turn, but the model is still computational at the core. Given this background, it is hardly surprising that the psynet model should have significant implications for AI.

    In fact, the model's AI implications are more radical than might be perceived at first glance. While the typical pattern in AI is for cognitive theory to be driven by computational experiment, the psynet model represents an attempt to move in exactly the opposite direction, from theory to experiment: to begin with a general, well fleshed-out model of computational psychology, and then formulate computational experiments based on this model. The SEE model, briefly discussed below, is a simple example of this approach.

    The psynet approach to AI might be called a "scaling-down" approach, as opposed to the standard "scaling-up" approach which assumes that one can construct simple computational models and then scale up to obtain a complex, intelligent system. To fully appreciate the radical nature of the scaling-down approach, a little historical background is needed.

    Early AI researchers, as has been amply documented (Dreyfus, 1993), vastly overestimated the ease of generalization. They produced programs which displayed intelligence in very limited domains -- e.g. programs which were good at playing chess, or recognizing letters, or doing calculus problems, or moving blocks around in a room. In this way, they believed, they were constructing intelligent algorithms, which would then, with a little tinkering, be able to turn their intelligence to other problems. This is not an unreasonable notion; after all, teaching a person chess or calculus improves their general powers of thought; why shouldn't the same be true of a computer? But in fact these classic AI programs were idiot savants. The programs worked because they embodied rules for dealing with specific situations, but they never achieved the ability to come into a new situation and infer the appropriate rules. The assumption was that reasoning ability would scale up from micro-worlds to the real macro-world in which we live, or at least to artificially constructed macro-worlds containing numerous intersecting sub-environments. But this assumption proved disastrously wrong.

    Over the last ten to fifteen years, the connectionist paradigm has breathed new life into artificial intelligence. Even more recently, the theory of genetic algorithms has begun to direct AI research down a new path: the achievement of intelligence by simulating evolutionary processes rather than brain processes or reasoning processes. But it is not hard to see that, exciting as they are, these new ideas fall prey to the same basic fallacy as old-style AI. The programs are designed to perform well on toy problems; it is then assumed that whatever works on the toy problems will "scale up" to deal with the real problems confronted by actual intelligent systems. But this assumption of scale-uppability contradicts the available evidence. Anyone who has worked with neural nets or genetic algorithms knows that, to get any practical use out of these constructs, a great deal of intelligent human planning and intervention is needed. One must first figure out how to best represent the data in order to present it to the program. There is a huge gap between generalization ability in simple, appropriately represented domains and generalization ability across a variety of complex, un-preprocessed domains. The former does not "scale up" to yield the latter.     

    Classical AI depends on scaling up from rule-based learning in a "microworld" to rule-based learning in the macroworld. Connectionist and GA-based AI depend on scaling up from localized generalization to reality-wide generalization. But according to the psynet model, the most important aspect of an intelligent system is precisely that aspect which cannot be inferred by the "scaling-up" method: the overall structure. To get psychologically meaningful AI applications, one must scale down from the appropriate overall structure, instead of blindly counting on scaling-up.

Developing Psynet Applications

    What sorts of questions are involved in actually developing AI applications based on the psynet model? There are two important decisions to be made: the degree of static structure to be built in, and the nature of the component pattern/processes.

    The first question returns us to the dynamic and static views of the psynet. The static approach to psynet AI begins with a dual network data structure and populates this structure with appropriate pattern/processes. Each node of the network is provided with a "governor" processor that determines the necessity for mutation and swap operations, based on the success of the processes residing at that node. If the processes have done what was expected of them by the higher-level processes which guide them, then little mutation of subprocesses, and little swapping of subprocess graphs, will be required. But if the processes have left a lot of higher-level expectations unfulfilled (i.e., according to the ideas given above, if they have generated a large amount of emotion), then mutation and swapping will be rampant.
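
    A sketch of such a governor, under assumed names and parameter values, might look as follows: the node compares what was expected of its processes with what they achieved, and sets the mutation rate in proportion to the shortfall.

```python
import random

def governor_rate(expected: float, achieved: float,
                  base_rate: float = 0.01, gain: float = 0.5) -> float:
    """Set the mutation/swap rate in proportion to unfulfilled expectation."""
    shortfall = max(0.0, expected - achieved)
    return min(1.0, base_rate + gain * shortfall)

def maybe_mutate(process: dict, rate: float) -> dict:
    if random.random() < rate:
        process["param"] += random.gauss(0.0, 1.0)   # rampant change at high rates
    return process

node = {"param": 0.0}
rate = governor_rate(expected=1.0, achieved=0.2)   # large unmet expectation
print(rate)                                        # 0.41: mutation is now likely
maybe_mutate(node, rate)
```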

    The dynamic approach, on the other hand, begins with pattern/processes interconnected in an arbitrary way, and, instead of imposing a dual network structure on these processes, relies on autopoietic attraction to allow the dual network to emerge. The distinction between these two approaches is not a rigid one; one may begin with more or less "dual network like" graphs. At this point, there is no way of determining the point on this continuum which will lead to the most interesting results.

    The second decision, how to implement the pattern/processes, leads to an even greater variety of possibilities. Perhaps the simplest viable options are bit string processes (a la the GA), Boolean processes, and repetition processes, processes which recognize repeated patterns in external input and in one another. The latter, one suspects, might be useful in text processing applications.

    These simple pattern/process options all lead to architectures that are fairly regimented, in the sense that all the processes have the same basic form. However, this kind of regimentation is not in any way implicit in the psynet model itself. The only substantial restrictions imposed by the model are that: 1) processes must be able to recognize a wide variety of patterns, and 2) processes must be able to act on each other with few limitations (i.e. they must form a "typeless domain," or a good approximation thereof).

    In the long term, it will certainly be interesting to construct psynets involving a similar variety of pattern/processes. But we may also be able to do a great deal with regimented architectures; at present this is difficult to predict. Interestingly, even the degree of regimentation of the human brain is a matter of debate. On the one hand, the brain contains all sorts of specialized pattern-recognition processes; the processes relating to visual perception have been studied in particular detail. On the other hand, as mentioned above, Edelman (1988) has argued that these complex processes are all built as different combinations of a fairly small number of repeated, functionally equivalent neuronal groups.

Psynets and the Darwin Machine Project

    One possible route toward psynet AI is the Simple Evolving Ecology (SEE) model, which will be discussed in a later chapter, once more background on the genetic algorithm has been provided. Another possibility involves the "Darwin Machine" project, currently being carried out by the Evolutionary Systems Department at ATR Human Information Processing Research Laboratories in Japan, under the supervision of Katsunori Shimohara. Shimohara (1994) describes a three-phase research programme, the ultimate result of which is intended to be the construction of an artificial brain. The philosophy of this programme is not to simulate the precise workings of the brain, but rather to use methods from evolutionary computation to construct a device that works better than the brain. The first phase is an exploration of techniques for evolution of software and hardware. The second phase is a construction of a "Darwin Chip" and "Darwin Machine" which will incorporate these techniques, thus moving evolutionary learning from the software level to the hardware level. The third phase, still almost entirely speculative, is a re-implementation of the Darwin Machine using nanotechnology. This third phase, it is felt, will produce a Darwin Machine of sufficient complexity to support true intelligence, an "Artificial Brain System."

    The Darwin Machine project would seem to be an excellent testing ground for the psynet model. If the Psynet Conjecture is correct, then the construction of an artificial brain cannot succeed unless the psynet structure is adhered to. On the other hand, the evolutionary methodology which Shimohara advocates is ideally suited for the psynet model -- a fact which should be fairly clear from the brief discussion of evolution in the psynet given above, and even clearer from the more detailed discussion of evolutionary computation in the dual network given in EM.

    Perhaps the most interesting immediate goal of Shimohara's group is the CAM-Brain Project (a CAM, or Cellular Automata Machine, is a special kind of hardware designed for parallel simulation of cellular automata):

        The aim of the CAM-Brain Project is to build (i.e. grow/evolve) an artificial brain by the year 2002. This artificial brain should initially contain thousands of interconnected artificial neural network modules, and be capable of controlling approximately 1000 "behaviors" in a "robot kitten." Using a family of CAM's, each with its own processor to measure the performance quality or fitness of the evolved neural circuits, will allow the neural modules and their interconnections to be grown and evolved at electronic speeds.

The psynet model makes a very specific suggestion about this project. Namely, it suggests that the project will succeed in producing a reasonably intelligent kitten if and only if:

     -- many of the neural network modules are used as pattern-recognition processes (pattern/processes)

     -- the network of modules is arranged in a dual network structure

    Because of the relative simplicity of a robot kitten, as opposed to, say, a human brain, one cannot call these suggestions "predictions." The absence of a psynet structure in a robot kitten would not constitute a disproof of the psynet model. But on the other hand, if it were empirically demonstrated that a psynet is necessary for intelligence in a robot kitten, this would certainly constitute a strong piece of evidence in favor of the psynet model.

    In fact, depending on the level of intelligence of this kitten, many of the more specific phenomena discussed above can be expected to show up. For instance, the psynet resolution of the paradoxes of logic should be apparent when the kitten learns causative relationships. A very simple version of continuous compositionality may be observable in the way the kitten responds to combinations of stimuli in its environment. The evolution by natural selection of subnetworks should be obvious every time the kitten confronts new phenomena in its environment. The autopoiesis of belief systems should be apparent as the kitten retains a system for reacting to a certain situation even once the situation has long since disappeared. Even the emergence of dissociated self-systems, as will be discussed in later chapters, could probably be induced by presenting the kitten with fundamentally different environments, say, on different days of the week.

2.7 CONCLUSION

    The complexity of the mind, I have argued, does not prohibit us from obtaining a unified understanding. Rather than interpreting the mind's complexity as an obstacle, one may take it as a challenge to the imagination, and as an invitation to utilize ideas from complex systems science. One may seek to provide a theory of sufficient simplicity, flexibility and content to meet the complexity of psychological systems head on.

    The psynet model is not at bottom a "synthesis"; it is a simple entity with its own basic conceptual coherence. However, because of its focus on interdependences, the model is particularly amenable to study using the various overlapping tools of complex systems science. In particular it leads one to think of psychological phenomena in terms of:

     -- evolution by natural selection as a general mechanism of form creation

     -- universal computation, as a foundation for the study of information and pattern

     -- dynamical systems theory -- the concept of attractors and convergence thereto

     -- autopoiesis, or self-organization combined with self-production

     -- agent-based modeling, as a way of bridging the gap between connectionist and rule-based modeling

    Summing up, we may ask: What is the status of the psynet model, as a model of mind? To make a fair assessment, one must consider the model in three different lights: as mathematics, as theoretical neuroscience, and as psychology.

    Mathematically, one can argue convincingly that the dual network is indeed an attractor of the cognitive equation, of pattern/process magician dynamics. The proof is inductive: one shows that if a given set S of processes are all attractors of the cognitive equation, and one arranges the elements of S in a flat heterarchical network M, with the process systems that recognize emergent patterns among elements of S arranged in a second flat heterarchical network N supervising M, then the combined network of M and N is still an attractor of the cognitive equation. According to this theorem, if one undertakes to model the mind as a magician system (or, more generally, agent system), then the dual network is one configuration that the mind might get itself into. The possibility of other, non-dual-network attractors has not been mathematically ruled out; this is an important open question.

    In the next chapter I will deal with the applications of the psynet model to neuroscience. The psynet-brain connection has been mentioned repeatedly in previous publications, but has never before been systematically pursued. It turns out that the psynet model matches up quite naturally with what is known about the structure of the cortex, and provides a handy platform for exploring various questions in cognitive neuroscience.

    Finally, in the area of psychology, a great deal of work has been done attempting to demonstrate the ability of the psynet model to account for human mentality. In SI, induction, deduction, analogy and associative memory are analyzed in detail as phenomena of pattern/process dynamics. In EM, the parallels between Neural Darwinism, evolutionary ecology, and the psynet model are illustrated, the point being to demonstrate that the psynet model is capable of accounting for the evolutionary and creative nature of human thinking. In CL, a psynet-theoretic account of language is given, and personal and scientific belief systems are analyzed in terms of autopoietic attraction in mental process systems. Of course, all the phenomena touched on in this work may be explained in many different ways. The point is that the psynet model gives a simple, unified explanation for a wide variety of psychological phenomena, in terms of complexity-science notions such as algorithmic pattern, attractors, agent systems and adaptive evolution.

    The task of developing and validating such a complex model is bound to be challenging. But this difficulty must be weighed against the immense potential utility of a unified theory of biological and computational intelligence. The explorations summarized here and in the companion paper indicate that, at the very least, the psynet model shows promise in this regard.

APPENDIX 1: THE PSYNET MODEL AS MATHEMATICS

    The task of this Appendix is to formalize the statement that the psynet model is an accurate model of mind, thus turning the psynet model into a purely mathematical hypothesis. The first step toward this end is to define what I mean by "mind." Having done this, it is not difficult to give various rigorous formulations of the statement that the psynet models mind.

    In SI a simple "working definition" of mind is given: a mind is the structure of an intelligent system (i.e. the fuzzy set of patterns in an intelligent system). One may also give more sophisticated definitions, e.g. by weighting the degree of membership of each pattern in the mind according to some measure of its relevance to intelligent behavior. Intelligence is then defined as "the ability to achieve complex goals in difficult-to-predict environments." A mind is, therefore, the structure of a system that can achieve complex goals in unpredictable environments. These definitions obviously do not solve the numerous philosophical and psychological problems associated with mind and intelligence. But, in the spirit of all mathematical definitions, they do give us something to go on.

    The terms in this definition of intelligence may all be defined precisely. For instance, put simply, an environment is said to be difficult to predict if it couples unpredictability regarding precise details (e.g. high Liapunov exponent) with relative predictability regarding algorithmic patterns (i.e. a moderate correlation between patterns inferrable prior to time t and patterns inferrable after time t; and between patterns inferrable at one spatial location and patterns inferrable at another spatial location). Similarly, a goal, formalized as a function from environment states to some partially ordered space representing outcome qualities, is said to be complex if it couples unpredictability regarding precise details (e.g. high Lipschitz constant) with relative predictability regarding algorithmic patterns (i.e. moderate correlations between patterns inferrable from one part of the function's graph and patterns inferrable from another).
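
    The logistic map in its fully chaotic regime is a stock example of this combination, and the following sketch (my illustration, not part of the formal definitions) exhibits both halves: precise details are unpredictable, since nearby trajectories diverge at a positive Liapunov rate, while a coarse algorithmic pattern -- the fraction of time the orbit spends below 0.5 -- is perfectly stable.

```python
def logistic(x):
    return 4.0 * x * (1.0 - x)    # the fully chaotic regime of the logistic map

# Detail-level unpredictability: two nearly identical initial conditions
# decorrelate within a few dozen iterations (positive Liapunov exponent).
x, y, separation = 0.3, 0.3 + 1e-10, 0.0
for _ in range(60):
    x, y = logistic(x), logistic(y)
    separation = max(separation, abs(x - y))
print(separation > 0.1)           # True: the tiny initial error has blown up

# Pattern-level predictability: the long-run fraction of time spent below
# 0.5 is a stable statistic of the very same dynamics.
x, count, steps = 0.3, 0, 100_000
for _ in range(steps):
    x = logistic(x)
    count += x < 0.5
print(round(count / steps, 2))    # ~0.5: a robust algorithmic pattern
```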

    Given this definition of intelligence, one may give the following formalization of the statement that the psynet models mind (this is a rephrasing of an hypothesis given in the final chapter of SI):    

     -- Psynet Hypothesis: A system displays general intelligence if and only if it displays the psynet as a prominent algorithmic pattern (where intelligence and pattern are defined according to the same model of computation).

    In other words, what this says is that a system is intelligent only if it has the psynet as a substantial part of its mind. A proof (or disproof!) of this conjecture has proved difficult to come by. However, it is possible to unravel the conjecture into simpler conjectures in a way that provides at least a small amount of insight. For instance, suppose one narrows the focus somewhat, and instead of general systems considers only appropriate magician systems. Then the Psynet Hypothesis suggests the following, somewhat more approachable, hypothesis:

     -- Probable Producibility Hypothesis: a large dual network is a wide-basined attractor of the cognitive equation.

    If one removes the word "wide-basined" from this conjecture, one obtains the statement that a large dual network is an autopoietic system under the cognitive equation; a slightly weaker conjecture which in CL is called the Producibility Hypothesis. The conjecture formulated here claims not only that the dual network stably produces itself, but also that, if one starts a magician system from an arbitrary initial condition, it is reasonably likely to self-organize into a dual network.

    These hypotheses give rise to a variety of possibilities. If the Producibility Hypothesis were to be proved false, then the psynet model would have to be abandoned as fundamentally unsound. On the other hand, if the Producibility Hypothesis were to be proved true, then the problem of validating the psynet model as a model of human and non-human intelligence would still remain. But at least the internal consistency of the model would not be in question. The psynet would be a demonstrably viable cognitive structure.

    If the Probable Producibility Hypothesis were proved false, while the weaker Producibility Hypothesis were proved true, this would validate the dual network as a viable cognitive structure, but would raise a serious problem regarding the evolution of the dual network. One would have to assume that the evolution of the dual network had begun from a very special initial condition; and, in working out AI applications, one would have to be certain to choose one's initial condition carefully.

    Finally, if the Probable Producibility Hypothesis were proved true, then this would validate the dual network as a viable cognitive model, and would also verify the ease of arriving at a dual network structure from an arbitrary adaptive magician system.

    These statements give a rigorous way of approaching the claim that the psynet model is a valid psychological model. However, they do not address the question of other cognitive models. This question is dealt with by the following

     -- Exclusivity Hypothesis: There are no abstract structures besides the dual network for which the Probable Producibility Hypothesis is true.

    However, a proof of this appears at least as difficult as a proof of the Psynet Hypothesis.

APPENDIX 2: FORMAL DEFINITION OF THE DUAL NETWORK

    While the concept of the dual network is intuitively quite simple, it supports no simple formalization. There are many different ways to formalize the same basic idea. What follows is one approach that seems particularly reasonable.

    Define a geometric magician system, or GMS, as a graph labeled with a collection of magicians at each node. The concept of a dual network may then be formalized in terms of the notion of a fuzzy subset of the space of geometric magician systems. Let k be a small integer, to be used for gauging proximity in a GMS.

    Where G is a GMS, define the heterarchicality of G, het G, as the average over all nodes N of G of the quantity v/|G|, where v is the amount of emergent pattern recognized by the k-neighbors of the residents of N, between the residents of node N and the residents of k-neighboring nodes.

    This gauges the degree to which G is "associative" in the sense of instantiating patterns in its structure.

    Next, define a stratified geometric magician system or SGMS as a GMS in which each node has been assigned a certain integer "level." Where H is an SGMS, define the hierarchality of H, hier H, as the product wx/|H|^2, where

     -- w is the total weight of those magician interactions appearing in F[R[H]] which involve a magician on level i acting on a magician of level i-1 to produce a magician on level i-2.

     -- x is the total amount of emergent pattern recognized by magicians on some level i amongst magicians on level i-2.

    The quantity w gauges the degree to which H instantiates the principle of hierarchical control, the quantity x gauges the degree to which H demonstrates the hierarchical emergence of increasingly complex structures, and the quantity hier H thus measures the degree to which H is a valid "hierarchical network."

    The degree to which a geometric magician system G is a dual network may then be defined as the product of the hierarchality and heterarchicality of G.
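
    Rendered schematically in code, these definitions read as follows. The hard part -- actually measuring the amounts of emergent pattern v, w and x -- is taken as given here; those measurements would themselves require a full pattern-recognition apparatus.

```python
def heterarchicality(v_per_node):
    """het G: the average over all nodes of v/|G|, where v is the emergent
    pattern recognized between a node's residents and its k-neighbours."""
    n = len(v_per_node)
    return sum(v / n for v in v_per_node) / n

def hierarchality(w, x, n_nodes):
    """hier H: the product wx / |H|^2."""
    return (w * x) / (n_nodes ** 2)

def dual_network_degree(v_per_node, w, x):
    """Degree to which the system is a dual network: the product of the two."""
    return heterarchicality(v_per_node) * hierarchality(w, x, len(v_per_node))

# Three-node example with assumed pattern measurements.
print(dual_network_degree([2.0, 3.0, 1.0], w=4.0, x=5.0))
```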

    All this formalism speaks of GMS's. To transfer it to ordinary magician systems, we must define a GMS G to be consistent with a magician system M if the magician population of G is the same as the magician population of M, and the interactions permitted by the magician dynamic of M do not include any interactions that would be forbidden in G. The degree to which a magician system M is a dual network is then the maximum, over all GMS's with which M is consistent, of the degree to which that GMS is a dual network.

    What this exceedingly unsightly definition says is, quite simply, that the degree to which a magician system is a dual network is the degree to which this system may be understood as a combination of hierarchical and heterarchical geometric magician systems. The difficulty of expressing this idea mathematically is an indication of the unsuitability of our current mathematical language for expressing psychologically natural ideas.