Wild Computing -- copyright Ben Goertzel, © 1999


Chapter 2:
Mind as Network and Emergence

1. Introduction

Mind consists of emergent pattern -- but emergent pattern is necessarily emergent from something. At present, the intelligence we know in the greatest detail is our own, which emerges from the human brain. The emergence of intelligence from the Internet will not mimic the emergence of intelligence from the brain in every detail, but there will be many commonalities. In this chapter I will discuss the brain as a self-organizing network, and the mind as a self-organizing network, and how the latter network emerges from the former. This serves us two purposes in our study of Internet intelligence: it gives us a coherent model of emergent intelligence, to be applied to the Net; and it gives us an example of emergent intelligence coming out of a particular physical system.

2. Neural Networks

The premier computer models of the brain are "neural network" models, which bear many interesting similarities to modern computer networks -- they are asynchronous systems carrying out massively distributed processing -- and so are highly germane to the topics of this book. Mind as emergent from neural networks is different from mind as emergent from computer networks, but there are more similarities between the two than one would expect to find, say, between mind as emergent from neural networks and mind as emergent from an intelligent ocean or an intelligent cloud of gas. Network-ness carries numerous specialized properties that are important for the emergence of mind.

The "neural" in "neural network" refers to the nerve cell or neuron -- a type of cell which exists throughout the body for the purpose of registering sensations, but exists in greatest bulk within the brain. The purpose of most neurons in the brain is not to directly register sensations from the outside world, but to register sensations from other neurons in the brain. I.e., the brain is a huge sense organ whose goal is to sense itself via its neurons. Neurons are not the only brain cells; in fact, they are greatly outnumbered by glia. However, many neuroscientists (Edelman, 1987; Rose and Dobson, 1985) believe that the key to mental process lies in the large-scale behavior of networks of neurons; and I do not disagree.

A neuron consists of a cell body with a long, narrow axon emerging from one end, and a large number of branches called dendrites snaking out in all directions. The dendrites are inputs -- they receive electrical signals from other neurons. The cell body periodically generates -- "fires" -- a new electrical impulse based on these input signals. After it fires, it needs to "recover" for a while before it can fire again; and during this period of recovery it basically ignores its input.

The axon carries the electrical impulse from the cell body to the dendrites and cell bodies of other neurons. The points at which signals pass from one neuron to another are called synapses, and they come in two different forms -- excitatory and inhibitory. When an impulse arrives through an excitatory synapse, it encourages the receiving neuron to fire. When an impulse arrives through an inhibitory synapse, it discourages the receiving neuron from firing.

Each synapse has a certain conductance or "weight" which affects the intensity of the signals passing through it. For example, suppose excitatory synapse A has a larger "weight" than excitatory synapse B, and the same signal passes through both synapses. The signal will be more intense at the end of A than at the end of B.

Roughly speaking, a recovered neuron fires if, within the recent past, it has received enough excitatory input and not too much inhibitory input. The amount of the past which is relevant to the decision whether or not to fire is called the period of latent addition. How much excitation is "enough," and how much inhibition is "too much," depends upon the threshold of the neuron. If the threshold is minimal, the neuron will always fire when its recovery period is over. If the threshold is very high, the neuron will only fire when nearly all of its excitatory synapses and virtually none of its inhibitory synapses are active.

Mathematically, the rule that tells a neuron when to fire can be modeled roughly as follows. Let wi be the weight of the i-th synapse which inputs to the neuron, where positive weights denote excitatory connections and negative weights denote inhibitory connections. Let xi(t) denote the signal coming into the neuron through the i-th synapse, at time t, where the time variable is assumed to be discrete. Let P be the period of latent addition, and let R be the recovery period. Then the total relevant input to the neuron at time t is the sum of wi xi(s), taken over all synapses i and over all times s satisfying max{t-P, r+R} < s < t, where r is the time at which the neuron last fired. Where T is the threshold of the neuron, the neuron fires at time t if its total relevant input exceeds T. When a neuron fires, its output is 1; when it does not fire, its output is 0.
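
To make the rule concrete, here is a minimal sketch in Python; all names and parameter values are hypothetical, chosen only for illustration:

```python
# Sketch of the firing rule above. w[i] is the weight of synapse i
# (negative = inhibitory); x[s][i] is the signal on synapse i at time s.

def fires(w, x, t, r, P, R, T):
    """Return 1 if the neuron fires at time t, else 0.

    r: time of the neuron's last firing
    P: period of latent addition
    R: recovery period
    T: threshold
    """
    start = max(t - P, r + R)   # only the recent, post-recovery past counts
    total = sum(w[i] * x[s][i]
                for s in range(start + 1, t)
                for i in range(len(w)))
    return 1 if total > T else 0

# e.g. one excitatory and one inhibitory synapse, constant excitatory input:
w = [1.0, -0.5]
x = {s: [1, 0] for s in range(10)}
print(fires(w, x, t=5, r=0, P=3, R=1, T=0.5))   # prints 1
```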

A "neural network," then, is a network of interconnected neurons firing according to this rule. This is a greatly oversimplified model of the brain: it gives no role to other brain cells such as glia, and it completely ignores all the chemistry that mediates actual neural interaction. In the brain the passage of an electrical signal from one neuron to another is not exactly analogous to the passage of electricity across a wire. This is because most neurons that are "connected" do not actually touch. What usually happens when a signal passes from neuron A to neuron B is that the dendrites of neuron A build up a charge which causes certain chemicals called neuro-transmitters to carry that charge to the dendrites of neuron B. The neural network model ignores all the subtleties of this process.

So, to consider the brain as a neural network is an oversimplification. The "neural networks" which are now so popular in computer science and electrical engineering (Garrido, 1990; Kawato et al., 1987; Goldberg et al., 1988; Hopfield and Tank, 1985) are usually simplified even further. It is generally assumed that the period of latent addition is 1 time step, and the recovery period is 0 time steps.

Such simplified "neural networks" have proven themselves effective at a number of difficult practical problems -- combinatorial optimization and associative memory (Hopfield and Tank, 1985), pattern recognition (Grossberg, 1987), and robotic control (Goldberg et al., 1988), to name a few. Mathematically, they are similar to the physicists' spin glasses (Garrido, 1990). Everyone realizes that these networks are mediocre brain models, but the connection with neuroscience is tantalizing nonetheless.

For example, the well known Hopfield network (Hopfield, 1980) uses first-order neurons to minimize functions. To explain the idea behind this network, let us define a state of a network of n neurons as a binary sequence a1a2...an. A state is periodic with period p if, whenever xi(t)=ai for i=1,...,n, it follows that xi(t+p)=ai for i=1,...,n. If a state is periodic with period 1, then it is an equilibrium.

Given a certain function f from binary sequences to real numbers, Hopfield's approach is to define a network whose equilibrium states are local minima of f, and which has no periodic points besides its equilibria. Then one may set the state of the network at time zero equal to any random binary sequence, and eventually the values xi(t) will settle into one of the equilibria. Of course, if a function has many local minima, the corresponding network will have many equilibria.
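
The settling dynamic is easy to sketch. Below is a minimal Python illustration, assuming the standard symmetric-weight, asynchronous-update setup; the weight matrix and initial state are hypothetical:

```python
import random

def hopfield_settle(W, x, max_sweeps=100):
    """Update binary state x (entries 0/1) under symmetric weights W,
    one neuron at a time, until an equilibrium (period-1 state) is reached."""
    n = len(x)
    for _ in range(max_sweeps):
        changed = False
        for i in random.sample(range(n), n):   # asynchronous update order
            new = 1 if sum(W[i][j] * x[j] for j in range(n)) > 0 else 0
            if new != x[i]:
                x[i], changed = new, True
        if not changed:                        # no neuron wants to flip: equilibrium
            return x
    return x

# Weights favour agreement between neurons 0 and 1, disagreement with neuron 2:
W = [[ 0,  1, -1],
     [ 1,  0, -1],
     [-1, -1,  0]]
print(hopfield_settle(W, [1, 0, 1]))
```

With symmetric weights and asynchronous updates, each flip lowers an energy function, so the network cannot cycle; it must halt at an equilibrium, which is a local minimum of that energy.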

Formal neural networks like the Hopfield net, and feedforward backpropagation nets, are useful for computer and behavioral scientists, but unsatisfying to biological theorists. In recent years, several inventive biologists have sought to bridge the large gap between formal neural networks and actual brains. In my opinion, the most impressive of these efforts is Edelman's (1987) theory of neuronal group selection, or "Neural Darwinism."

The starting point of Neural Darwinism is the observation that neuronal dynamics may be analyzed in terms of the behavior of neuronal groups. The strongest evidence in favor of this conjecture is physiological: many of the neurons of the neocortex are organized into clusters, each containing roughly 10,000 to 50,000 neurons.

Once one has committed oneself to looking at groups, the next step is to ask how these groups are organized. A map, in Edelman's terminology, is a connected set of groups with the property that when one of the inter-group connections in the map is active, others will often tend to be active as well. Maps are not fixed over the life of an organism. They may be formed and destroyed in a very simple way: the connection between two neuronal groups may be "strengthened" by increasing the weights of the synapses joining the one group to the other, and "weakened" by decreasing those weights.

Formally, we may consider the set of neural groups as the vertices of a graph, and draw an edge between two vertices whenever a significant proportion of the neurons of the two corresponding groups directly interact. Then a map is a connected subgraph of this graph, and the maps A and B are connected if there is an edge between some element of A and some element of B. (If for "map" one reads "program," and for "neural group" one reads "subroutine," then we have a process dependency graph as drawn in theoretical computer science.)

This is the set-up, the context in which Edelman's theory works. The meat of the theory is the following hypothesis: the large-scale dynamics of the brain is dominated by the natural selection of maps. Those maps which are active when good results are obtained are strengthened, those maps which are active when bad results are obtained are weakened. And maps are continually mutated by the natural chaos of neural dynamics, thus providing new fodder for the selection process. By use of computer simulations, Edelman and his colleague Reeke have shown that formal neural networks obeying this rule can carry out fairly complicated acts of perception.
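
A toy sketch of this selectionist rule, in Python -- the structures and rates are hypothetical, meant only to show the strengthen/weaken/mutate loop:

```python
import random

def select_maps(weights, active, reward, rate=0.1, noise=0.01):
    """Strengthen maps active during good results (reward > 0), weaken
    those active during bad results (reward < 0), and let all weights
    drift slightly -- the 'mutation' supplied by neural chaos."""
    for m in active:
        weights[m] += rate * reward
    for m in weights:
        weights[m] = max(weights[m] + random.gauss(0, noise), 0.0)
    return weights

weights = {"mapA": 1.0, "mapB": 1.0}
weights = select_maps(weights, active=["mapA"], reward=+1.0)
print(weights)   # mapA is now stronger than mapB, so it is selected for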

3. The Psynet Model of Mind

Suppose that we accept, provisionally, the neural network as a model of the brain. We may then ask: how does the mind emerge from the brain? In fact, even if neural networks are fundamentally flawed as brain models, they may still be useful in inspiring us to address the question of emergence in interesting ways. The main point is -- the mind is a network of one kind, and the physical substrate underlying the mind, i.e. the brain, is a network of another kind. The emergence of mind from reality is the emergence of a network from a network.

In a series of publications over the past half-decade (Goertzel, 1993, 1993a, 1994, 1997), I have constructed a novel complex systems model of mind, which I now call the psynet model, aimed at answering precisely this question. The psynet model envisions the mind as an agent system, involving special kinds of agents called "magicians", which are distinguished from "agents" as generally understood in computer science (and discussed in the following chapter) by the fact that their main activity is transforming other magicians. Of course, all agents that interact with other agents can be thought of as magicians; the distinction between magicians and run-of-the-mill agents is fuzzy, just as is the distinction between, say, intelligent agents and run-of-the-mill agents. The question, in determining whether a system of agents is a "magician system," is how transformative the interactions are.

So, a magician system consists, quite simply, of a collection of entities called "magicians" which, by acting on one another, have the power to cooperatively create new magicians. Certain magicians are paired with "antimagicians," magicians which have the power to annihilate them. According to the psynet model, mind is a pattern/process magician system. It is a magician system whose magicians are concerned mainly with recognizing and forming patterns.

Crucial to this picture is the phenomenon of mutual intercreation, or autopoiesis. Systems of magicians can interproduce. For instance, a can produce b, while b produces a. Or a and b can combine to produce c, while b and c combine to produce a, and a and c combine to produce b. The number of possible systems of this sort is truly incomprehensible. But the point is that, if a system of magicians is mutually interproducing in this way, then it is likely to survive the continual flux of magician interaction dynamics. Even though each magician will quickly perish, it will just as quickly be re-created by its co-conspirators. Autopoiesis creates self-perpetuating order amidst flux.
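
The three-magician loop just described is simple enough to simulate. Here is a minimal Python sketch, using the hypothetical a/b/c production rules above:

```python
from itertools import combinations

# a and b combine to produce c; b and c produce a; a and c produce b.
RULES = {frozenset("ab"): "c",
         frozenset("bc"): "a",
         frozenset("ac"): "b"}

def step(population):
    """Every magician perishes each step; the next population is whatever
    the current pairs cooperatively create."""
    created = set()
    for pair in combinations(sorted(population), 2):
        product = RULES.get(frozenset(pair))
        if product:
            created.add(product)
    return created

pop = {"a", "b", "c"}
for _ in range(5):
    pop = step(pop)
    print(sorted(pop))   # ['a', 'b', 'c'] every time: the system recreates itself
```

Remove any one rule and the triple collapses within a couple of steps; the loop of mutual production is what keeps the system in existence amidst the flux.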

The magician system view of mind is merely one small step beyond Neural Darwinism. It says: let us take as the fundamental entities the states of neuronal modules. Let us envision these states, patterns localized in space and time, as interacting with one another, transforming each other into different states. Each state lasts for a brief period of time, giving rise to other states, and may then arise again as a result of this activity. The biologist does not wish to think in terms of states, as he prefers to deal with tangible, persistent structures, but it is states that make up the experienced mind, and it is the dynamics of states that we must come to grips with if we wish to understand the mind -- and the parallels between the human mind and the Internet. The Net does not have neuronal modules in any exact sense, yet the self-organizing system of states that emerges from the Net is structurally and dynamically very similar to the system of states emerging from the human brain. There is a parallel on the level of magician systems.

Some systems of magicians might be unstable; they might fall apart as soon as some external magicians start to interfere with them. But others will be robust; they will survive in spite of external perturbations. These robust magician systems are what I call autopoietic systems. The psynet model posits that thoughts, feelings and beliefs are autopoietic. They are stable systems of interproducing pattern/processes. In Chaotic Logic, autopoietic pattern/process magician systems are called structural conspiracies, a term which reflects the mutual, conspiratorial nature of autopoiesis, and also the basis of psychological autopoiesis in pattern (i.e. structure) recognition. A structural conspiracy is an autopoietic magician system whose component processes are pattern/processes.

But structural conspiracy is not the end of the story. The really remarkable thing is that, in psychological systems, there seems to be a global order to these autopoietic subsystems. The central claim of the psynet model is that, in order to form a functional mind, these structures must spontaneously self-organize into larger autopoietic superstructures. And perhaps the most important such superstructure is a sort of "monster attractor" called the dual network.

The dual network, as its name suggests, is a network of pattern/processes that is simultaneously structured in two ways. The first kind of structure is hierarchical. Simple structures build up to form more complex structures, which build up to form yet more complex structures, and so forth; and the more complex structures explicitly or implicitly govern the formation of their component structures. The second kind of structure is heterarchical: different structures connect to those other structures which are related to them by a sufficient number of pattern/processes. Psychologically speaking, the hierarchical network may be identified with command-structured perception/control, and the heterarchical network may be identified with associatively structured memory. (While the dual network is, intuitively speaking, a fairly simple thing, to give a rigorous definition requires some complex constructions and arbitrary decisions. One approach among many is given in From Complexity to Creativity.)
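
As a minimal data-structure sketch in Python (the class and field names are hypothetical), a dual-network node simply carries both kinds of link at once:

```python
class Process:
    """A pattern/process that sits in both networks at once."""
    def __init__(self, name):
        self.name = name
        self.parent = None        # hierarchical: the governing process
        self.children = []        # hierarchical: component processes
        self.associates = set()   # heterarchical: associatively related processes

    def add_child(self, child):
        child.parent = self
        self.children.append(child)

    def associate(self, other):   # heterarchical links are symmetric
        self.associates.add(other)
        other.associates.add(self)

# Hierarchical: a scene model governs an edge detector (perception/control).
scene, edge = Process("scene-model"), Process("edge-detector")
scene.add_child(edge)
# Heterarchical: the edge detector links associatively to a memory item.
edge.associate(Process("remembered-edge"))
```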

A psynet, then, is a magician system which has evolved into a dual network structure. Or, to place the emphasis on structure rather than dynamics, it is a dual network whose component processes are magicians. The central idea of the psynet model is that the psynet is necessary and sufficient for mind. And this idea rests on the crucial assumption that the dual network is autopoietic for pattern/process magician dynamics.

The psychological meaning of these structures has been elaborated in detail in previous publications and so will not be dwelt on here. Suffice it to say that the hierarchical network is prevalent in visual perception, and is evident both in the structure of the visual cortex and in the findings of perceptual psychology. It is also present, though not quite as blatantly, in such areas as the composition of motor actions, and language processing. The heterarchical network, on the other hand, is prevalent in the processing of taste and smell, and in the functioning of associative semantic and episodic memory.

4. Evolution and Autopoiesis in the Network of Mind

The dynamics of the dual network may be understood as a balance of two forces. There is the evolutionary force, which creates new forms, and moves things into new locations. And there is the autopoietic force, which retains things in their present form. If either of the two forces is allowed to become overly dominant, the dual network will break down, becoming either excessively unstable or excessively static and unresponsive.

Of course, each of these two "forces" is just a different way of looking at the basic magician system dynamic. Autopoiesis is implicit in all attractors of magician dynamics, and evolutionary dynamics is a special case of magician dynamics, which involves long transients before convergence, and the possibility of complex strange attractors.

To fully understand the role of evolution here, one must go beyond the standard Darwinian notion of "fitness", and measure fitness in terms of emergent pattern. In The Evolving Mind, I define the structural fitness of an organism as the size of the set of patterns which synergetically emerge when the organism and its environment are considered jointly. If there are patterns arising through the combination of the organism with its environment, which are not patterns in the organism or the environment individually, then the structural fitness is large. Perhaps the easiest illustration is camouflage -- there the appearance of the organism resembles the appearance of the environment, generating the simplest possible kind of emergent pattern: repetition. But symbiosis is an even more convincing example. The functions of two symbiotic organisms match each other so effectively that it is easy to predict the nature of either one from the nature of the other.
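
One crude way to operationalize this notion is via compression: patterns that emerge only when organism and environment are considered jointly show up as extra compressibility of the pair. A minimal Python sketch, using zlib as a stand-in for pattern recognition (an assumption for illustration, not part of the original definition):

```python
import zlib

def structural_fitness(organism: bytes, environment: bytes) -> int:
    """Bytes saved by compressing the pair jointly, beyond what is saved
    by compressing each part separately -- a crude proxy for the size of
    the set of synergetically emergent patterns."""
    size = lambda b: len(zlib.compress(b))
    joint = len(organism) + len(environment) - size(organism + environment)
    separate = (len(organism) - size(organism)) + (len(environment) - size(environment))
    return joint - separate

env = b"green leaf, dappled light. " * 40
print(structural_fitness(env, env))               # camouflage: repetition emerges
print(structural_fitness(b"#" * len(env), env))   # little joint pattern
```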

The "environment" of a process in the psynet is simply its neighbors in the network. So the structural fitness of a process in the psynet is the amount of pattern that emerges between itself and its neighbors. But, at any given time, the probability of a process not being moved in the network is positively correlated with its degree of "fit" in the associative memory. This degree of fit is precisely the thestructural fitness! So, survival in current position is correlated with structural fitness with respect to immediate environment; and thus, the psynet evolves by natural selection. Furthermore, by the same logic, clusters of magicians may also be understood to evolve by natural selection. This observation leads up to a sense in which the psynet's evolutionary logic is different from that which one sees in ecosystems or immune systems. Namely, in the psynet, every time a process or cluster is moved in accordance with natural selection, certain processes on higher levels are being crossed over and/or mutated.

The issue of evolution is highly relevant to the question of the creativity of mental process networks. The genetic algorithm (Goldberg, 1988; to be discussed in detail in a later chapter) demonstrates the creative potential of the evolutionary process in a wide variety of computational contexts. And the GA is approximated by the activity of subnetworks of the dual network. Subnetworks are constantly mutating as their component processes change. And they are constantly "crossing over," as individual component interactions change in such a way as to cause sub-subnetworks to shift their allegiance from one subnetwork to another.
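
For concreteness, here is a minimal GA sketch in Python; the population size, rates, and bit-counting fitness function are hypothetical illustrations:

```python
import random

def evolve(fitness, n_bits=20, pop_size=30, generations=50, mut_rate=0.02):
    """Evolve bit-strings by selection, one-point crossover and mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]                    # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_bits)              # crossover point
            child = [bit ^ (random.random() < mut_rate)    # mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print(evolve(sum))   # fitness = number of 1-bits; converges toward all ones
```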

And, finally, it must be observed that this genetic-algorithm-type creativity, in the hierarchical network, and the evolutionary reorganization of the heterarchical network, are one and the same! When memory items move around from one place to another, seeking a "fitter" home, they are automatically reorganizing the hierarchical network -- causing subnetworks (mental "programs") to cross over and mutate. On the other hand, when processes switch their allegiance from one subnetwork to another, in a crossover-type process, their changing pattern of interaction constitutes a changing environment, which changes their fitness within the heterarchical network. Because the two networks are one, the two kinds of evolution are one. GA-style evolution and ecology are bound together very tightly, much more tightly than in the case of the evolution of species.

Ecology tells us that what is evolving in the mind is not arbitrary forms but self-preserving, autopoietic systems. In order to achieve the full psynet model, one must envision the dual network, not simply as an hierarchy/heterarchy of mental processes, but also as an hierarchy/heterarchy of evolving autopoietic process systems, where each such system is considered to consist of a "cluster" of associatively related ideas/processes. Each system may relate to each other system in one of three different ways: it may contain that other system, it may be contained in that other system, or it may coexist side-by-side with that other system. The dual network itself is the "grand-dad" of all these autopoietic systems.

Autopoiesis is then seen to play an essential role in the dynamics of the dual network, in that it permits thoughts (beliefs, memories, feelings, etc.) to persist even when the original stimulus which elicited them is gone. Thus a collection of thoughts may survive in the dual network for two reasons: because it is continually reinforced from outside, by perception or by other mental processes; or because it is autopoietic, continually recreating itself from within.

This is the logic of the mental network, emergent from the underlying network of brain. It is the logic of the set of states of neuronal modules -- and, I claim, it is also the logic of the set of states of agents in the newly complex, increasingly intelligent Internet. But this leads us on....