I believe, so that I may understand
-- Saint Augustine
Believing is the primal beginning even in every sense impression....
-- Friedrich Nietzsche
Are belief systems attractors? There is something quite intuitive about the idea. Before one settles on a fixed system of beliefs, one's opinions regarding a certain issue may wander all over the spectrum, following no apparent pattern. But once one arrives at a belief system regarding that topic, one's opinions thereon are unlikely to stray outside a narrow range.
But of course, if one is to declare that belief systems are attractors, one must specify: attractors of what dynamical system? To say "attractors of brain dynamics" is obvious but inadequate: the brain presents us with a system of billions or trillions of coupled nonlinear equations, which current methods are incapable of analyzing even on a qualitative level. If belief systems are to be usefully viewed as attractors, the relevant dynamical iteration must exist on a higher level than that of individual neurons.
In the preceding chapters I have argued that, in order to make headway toward a real understanding of the mind, one must shift up from the neural level and consider the structure and dynamics of interacting mental processes or neural maps (Edelman, 1988). Specifically, I have proposed an equation for the evolution of mental processes, and I have suggested that psychological systems may be viewed as subsets of the dual network which are strange attractors of this equation. Now, in this chapter, I will begin the difficult task of relating these formal ideas to real-world psychology -- discussing the sense in which particular human belief systems may be seen as subsystems of the dual network, and attractors of the cognitive equation.
After giving a simple formalization of the concept of "belief," I will consider the dynamics of belief systems as displayed in the history of science, with an emphasis on Lakatos's structural analysis of research programmes. Then I will turn to a completely different type of belief system: the conspiracy theory of a paranoid personality. By contrasting these different sorts of belief systems in the context of the dual network and the cognitive equation, a new understanding of the nature of rationality will be proposed. It will be concluded that irrationality is a kind of abstract dissociation -- a welcome conclusion in the light of recent work relating dissociation with various types of mental illness (van der Kolk et al, 1991).
Personalities and their associated belief systems are notoriously vague and complicated. It might seem futile to attempt to describe such phenomena with precise equations. But the Church-Turing Thesis implies that one can model anything in terms of computational formulas -- if one only chooses the right sort of formulas. My claim is that the "cognitive law of motion," applied in the context of the dual network model, is adequate for describing the dynamics of mentality. The theory of belief systems given in this chapter and the next is a partial substantiation of this hypothesis.
9.1 SYSTEMATIC BELIEF
In this section I will give abstract, formal definitions for the concepts of "belief" and "belief system." Though perhaps somewhat tedious, these definitions serve to tie in the idea of "belief" with the formal vocabulary introduced in Chapters Two and Three; and they provide a solid conceptual foundation for the more practical considerations of the following sections.
The basic idea is that a belief is a mental process which, in some regard, gives some other mental process the "benefit of the doubt." Recall that, in Chapter Two, I defined an infon as a fuzzy set of patterns. Suppose that a certain process X will place the process s in the associative memory just as if s displayed infon i -- without even checking to see whether s really does display i. Then I will say that X embodies the belief that s displays infon i. X gives s the benefit of the doubt regarding i.
The mental utility of this sort of benefit-giving is obvious: the less processing spent on s, the more available for other tasks. Mental resources are limited and must be efficiently budgeted. But it is equally clear that minds must be very careful where to suspend their doubts.
Next, a test of a belief may be defined as a process with the potential to create an infon which, if it were verified to be present, would decrease the intensity of the belief. In other words, a test of a belief X regarding s has the potential to create an infon j which would cause X to give s less benefit of the doubt. Some beliefs are more testable than others; and some very valuable beliefs are surprisingly difficult to test.
Finally, a belief system is a group of beliefs which mutually support one another, in the sense that an increased degree of belief in one of the member beliefs will generally lead to increased degrees of belief in most of the other member beliefs. The systematicity of belief makes testing particularly difficult, because in judging the effect of infon j on belief X, one must consider the indirect effects of j on X, via the effects of j on the other elements of the belief system. But, unfortunately for hard-line rationalists, systematicity appears to be necessary for intelligence. It's a messy world out there!
9.1.1. Formal Definition of Belief (*)
A belief, as I understand it, is a proposition of the form
" s |-- i with degree d"
or, in more felicitous notation,
(s,i;d).
In words, it is a proposition of the form "the collection of patterns labeled i is present in the entity s with intensity d." To say that the individual x holds the belief (s,i;d), I will write
"s |-- i //x with degree d",
or, more compactly,
(s,i,x;d).
Mentally, such a proposition will be represented as a collection of processes which, when presented with the entity s, will place s in the associative memory exactly as they would place an entity which they had verified to contain patterns i with intensity d. A belief about s is a process which is willing to give s the benefit of the doubt in certain regards. This definition is simple and natural. It does not hint at the full psychological significance of belief; but for the moment, it will serve us well.
Next, what does it mean to test a belief? I will say that an infon j is a test of a belief (s,i,x) relative to the observer y, with certainty level e, to degree NM, where
N = the degree to which the observer y believes that determining the degree d(s,j,x) will cause a decrease in d(s,i,x).
M = a measure of how practical the observer y believes it is to determine the degree to which s |-- j holds to within certainty e (that is, a quantity which decreases as the required effort increases).
I believe that this formal definition, awkward as it is, captures what one means when one makes a statement like "That would be a test of Jane's belief in so and so." It is not an objective definition, and it is not particularly profound, but neither is it vacuous: it serves its purpose well.
Factor N alone says that j is a test of i if y believes that determining whether j holds will affect x's degree of belief that i holds. This is the essence of test. But it is not adequate in itself, because j is not a useful test of i unless it is actually possible to determine the degree to which j holds. This is the purpose of the factor M: it measures the practicality of executing the test j.
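To make the bookkeeping concrete, here is a minimal Python sketch of this test-degree calculation. It is only an illustration: the class, the 0-to-1 scaling of both factors, and the reading of M as a practicality score (a decreasing function of estimated effort, in line with the remark above) are assumptions for the sake of the example, not part of the formal definition.

```python
from dataclasses import dataclass

@dataclass
class Belief:
    """The belief (s, i; d): entity s is taken to display infon i with intensity d."""
    entity: str    # label for s
    infon: str     # label for i
    degree: float  # d, assumed here to lie in [0, 1]

def test_degree(n_impact: float, m_practicality: float) -> float:
    """Degree of a test, taken as the product N*M.  n_impact is the observer's
    believed likelihood that settling the new infon would lower the degree of
    the belief; m_practicality scores, from 0 to 1, how feasible settling it
    actually is (read here as a decreasing function of the estimated effort)."""
    return n_impact * m_practicality

# A hypothetical case: a test judged very likely to shake the belief (N = 0.9)
# but nearly impossible to carry out (M = 0.05) ends up with a low degree.
b = Belief(entity="s", infon="i", degree=0.8)
print(test_degree(0.9, 0.05))   # 0.045
```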
To see the need for M, consider the theory, well known among philosophers, that there is some spot on the Earth's surface which has the property that anyone who stands there will see the devil. The only test of this is to stand on every single spot on the earth's surface, which is either impossible or impractically difficult, depending on the nature of space and time.
Or consider Galileo's belief that what one sees by pointing a telescope toward space is actually "out there". Since at that time there was no other source of detailed information as to what was "out there," there was no way to test this belief. Now we have sent men and probes into space, and we have measured the properties of heavenly bodies with radio telescopy and other methods; all these tests have supported Galileo's belief. But it is not hard to see why most of Galileo's contemporaries thought his belief unreasonable.
The role of the "observer" y is simple enough. If one posits an outside, "impartial" observer with access to all possible futures, then one can have an objective definition of test, which measures the degree to which the presence of a certain infon really will alter the strength of a belief. On the other hand, one may also consider the most "partial" observer of all: the belief-holder. It is interesting to observe that, when a certain human belief system appears to be strongly resistant to test, the belief-holders will generally acknowledge this fact just as readily as outside observers.
9.1.2. Systematic Belief (*)
The formal definition of "belief system" is a little bit technical, but the basic idea is very simple: a belief system is a collection of beliefs which are mutually supporting in that a test for any one of them is a test for many of the others. It is permitted that evidence in favor of some of the beliefs may be evidence against some of the others -- that what increases the intensity of belief in A may decrease the intensity of belief in B, where both A and B are in the system. But this must not be the rule -- the positive reinforcement must, on balance, outweigh the negative reinforcement.
To be precise, consider a set of beliefs {A1,...,An}. Let cij = cij(K;y) denote the amount of increase in the degree to which Aj holds that, in the belief of y, will result from an increase by an amount of K in the degree to which Ai holds. Decrease is to be interpreted as negative increase, so that if y believes that a decrease in the degree to which Aj holds will result from an increase in the degree to which Ai holds by amount K, then cij(K;y) will be negative. As with tests, unless otherwise specified it should be assumed that y=x.
Then the coherence C({A1,...,An}) of the set {A1,...,An} may be defined as the sum over all i, j and K of the cij. And the compatibility of a belief B with a set of beliefs {A1,...,An} may be defined as C({A1,...,An,B}) - C({A1,...,An}).
The coherence of a set of beliefs is the degree to which the various member beliefs support each other, on the average, in the course of the mental process of the entity containing the beliefs. It is not the degree to which the various member beliefs "logically support" each other -- it depends on no system of evaluation besides that of the holder of the beliefs. If I think two beliefs contradict each other, but in your mind they strongly reinforce each other, then according to the above definition the two beliefs may still be a strongly coherent belief system relative to your mind. It follows that the "same" set of beliefs may form a different dynamical system in two different minds.
Additionally, it is not necessary that two beliefs in the same mind always stand in the same relation to each other. If A1 contradicts A2 half the time, but supports A2 half the time with about equal intensity, then the result will be a c12 near zero.
If none of the cij are negative, then the belief system is "consistent": none of the beliefs work against each other. Obviously, consistency implies coherence, though not a high degree of coherence; but coherence does not imply consistency. If some of its component beliefs contradict each other, but others support each other, then the coherence of a set of beliefs can still be high -- as long as the total amount of support exceeds the total amount of contradiction.
If a set of beliefs has negative coherence it might be said to be "incoherent." Clearly, an incoherent set of beliefs does not deserve the title "belief system." Let us define a belief system as a set of beliefs which has positive coherence.
The compatibility of a belief B with a belief system measures the expected amount by which the addition of B to the belief system would change the coherence of the belief system. If this change would be positive, then B has positive compatibility; and if this change would be negative, then B has negative compatibility -- it might be said to be incompatible.
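For readers who prefer arithmetic to prose, here is a small Python sketch of the coherence and compatibility calculations just defined. The support matrices are invented, and the sum over the increment K is assumed to have been absorbed into the coefficients.

```python
import numpy as np

def coherence(c: np.ndarray) -> float:
    """Coherence of a set of beliefs: the sum of all support coefficients
    c[i][j], with decrease counted as negative increase.  The sum over the
    increment K is assumed to be folded into the coefficients here."""
    return float(c.sum())

def compatibility(c_with_b: np.ndarray, c_without_b: np.ndarray) -> float:
    """Compatibility of a new belief B with the system: the change in
    coherence produced by adding B."""
    return coherence(c_with_b) - coherence(c_without_b)

# Hypothetical three-belief system: A1 and A2 reinforce each other, A3 mildly
# conflicts with A1 but supports A2 -- coherent overall, though not consistent.
c = np.array([[0.0, 0.6, -0.2],
              [0.5, 0.0,  0.3],
              [-0.1, 0.4, 0.0]])
print(coherence(c))   # 1.5 > 0, so this set qualifies as a belief system

# Adding a belief B that conflicts with A1 and A3 lowers the coherence,
# so B has negative compatibility with the system.
c_with_b = np.array([[0.0, 0.6, -0.2, -0.4],
                     [0.5, 0.0,  0.3,  0.1],
                     [-0.1, 0.4, 0.0, -0.3],
                     [-0.5, 0.2, -0.4, 0.0]])
print(compatibility(c_with_b, c))   # approximately -1.3: B is incompatible
```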
Finally, it must be noted that a given human mind may contain two mutually incompatible belief systems. This possibility reflects the fundamentally "dissociated" (McKellar, 1979) nature of human mentality, whereby the mind can "split" into partially autonomous mental sub-networks. The computation of the coefficients cij may be done with respect to any system one desires -- be it a person's mind, a society, or one component of a person's mind.
9.1.3. Belief and Logic
How does a mind determine how much one belief supports another? In formal terms, how does it determine the "correlation" function cij between belief i and belief j? Should an analysis of belief merely accept these "intercorrelations" a priori, as given products of the believing mind in question? Or is there some preferred "rational" method of computing the effect of a change in the intensity of one belief on the intensity of another?
To see how very difficult these questions are, assume for the sake of argument that all beliefs are propositions in Boolean logic. Consider a significantly cross-referential belief system S -- one in which most beliefs refer to a number of other beliefs. Then, as William Poundstone (1989) has pointed out, the problem of determining whether a new belief is logically consistent with the belief system S is at least as hard as the well-known problem of "Boolean Satisfiability," or SAT.
Not only is there no known algorithm for solving SAT effectively within a reasonable amount of time; it has been proved that SAT is NP-complete, which means (very roughly speaking) that if there is such an algorithm, then there is also a reasonably rapid and effective algorithm for solving any other problem in the class NP. And the class NP includes virtually every difficult computational problem ever confronted in a practical situation.
So the problem of determining the consistency of a belief with a significantly cross-referential belief system is about as difficult as any computational problem yet confronted in any real situation. To get a vague idea of how hard this is, consider the fact that, using the best algorithms known, and a computer the size of the known universe with processing elements the size of protons, each working for the entire estimated lifetime of the universe, as fast as the laws of physics allow, it would not be possible to determine the logical consistency of a belief with a significantly cross-referential belief system containing six hundred beliefs.
It must be emphasized that the problem of making a good guess as to whether or not a belief is logically consistent with a given belief system is an entirely different matter. What is so astoundingly difficult is getting the exact right answer every time. If one allows oneself a certain proportion of errors, one may well be able to arrive at an answer with reasonable rapidity. Obviously, the smaller the proportion of error permitted, the slower the computation; the precise rate of this tradeoff, however, is a difficult mathematical question.
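The contrast between exact and approximate consistency checking can be made concrete with a toy sketch. Below, beliefs are modeled, purely for illustration, as Boolean functions over a handful of shared propositions; the exact check enumerates every truth assignment, and so doubles in cost with each new proposition, while the randomized check is fast but can be wrong.

```python
import itertools, random

# Each belief is a Boolean function of a shared tuple of propositions; a new
# belief is consistent with the system if some truth assignment satisfies all
# of them at once (this is exactly the SAT problem).
beliefs = [
    lambda p: p[0] or p[1],
    lambda p: (not p[1]) or p[2],
    lambda p: p[2] or (not p[0]),
]
new_belief = lambda p: p[0] and p[2]

def exact_consistent(system, candidate, n_props):
    """Brute force over all 2**n_props assignments: exponential, but exact."""
    for assignment in itertools.product([False, True], repeat=n_props):
        if all(b(assignment) for b in system) and candidate(assignment):
            return True
    return False

def sampled_consistent(system, candidate, n_props, trials=200):
    """Randomized guess: fast, but may wrongly report inconsistency when a
    satisfying assignment exists and simply wasn't sampled."""
    for _ in range(trials):
        assignment = tuple(random.random() < 0.5 for _ in range(n_props))
        if all(b(assignment) for b in system) and candidate(assignment):
            return True
    return False

print(exact_consistent(beliefs, new_belief, 3))    # True
print(sampled_consistent(beliefs, new_belief, 3))  # almost certainly True here
```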
So when a mind determines the functions cij relating its beliefs, it may take logical consistency into account, but it seems extremely unlikely that it can do so with perfect accuracy, for three reasons: 1) based on experience, the human mind does not appear to be terribly logically consistent; 2) the brain is not an exact mechanism like a computer, and it almost certainly works according to rough probabilistic approximation methods; 3) the problem of determining logical consistency is NP-complete and it is hence very unlikely that it has a rapid, accurate solution for any but the smallest belief systems.
Hence it is unreasonable to require that a system of beliefs be "rational" in structure, at least if rationality is defined in terms of propositional logic. And the structural modifications to propositional logic suggested in Chapter Four only serve to make the problem of determining the cij even more difficult. In order to compute anything using the structural definition of implication, one has to compute the algorithmic information contained in various sequences, which is impossible in general and difficult in most particular cases.
From these considerations one may conclude that the determination of the functions cij -- of the structure of a belief system -- is so difficult that the mind must confront it with a rough, approximate method. In particular, I propose that the mind confronts it with a combination of deduction, induction and analogy: that it does indeed seek to enforce logical consistency, but lacking an effective general means of doing so, it looks for inconsistency wherever experience tells it inconsistency is most likely to lurk.
9.2 BELIEF SYSTEMS IN THE HISTORY OF SCIENCE
No mind consists of fragmentary beliefs, supported or refuted by testing on an individual basis. In reality, belief is almost always systematic. To illustrate this, let us consider some philosophically interesting examples from the history of science.
In his famous book, The Structure of Scientific Revolutions, Thomas Kuhn (1962) proposed that science evolves according to a highly discontinuous process consisting of 1) long periods of "normal science," in which the prevailing scientific belief system remains unchanged, and new beliefs are accepted or rejected largely on the basis of their compatibility with this belief system, and 2) rare, sudden "paradigm changes," in which the old belief system is replaced with a new one.
According to this analysis, the historical tendency of scientists has been to conform to the prevailing belief system until there suddenly emerges a common belief that the process of testing has yielded results which cannot possibly be made compatible with the old system. This point of revolution is called a "crisis." Classic examples of scientific revolution are the switch from Newtonian mechanics to relativity and quantum theory, and the switch from Ptolemaic to Copernican cosmology. This phenomenon is clearest in physics, but it is visible everywhere.
Kuhn never said much about how belief systems work; he placed the burden of explanation on sociology. Imre Lakatos (1978) was much more specific. He hypothesized that science is organized into belief systems called "research programmes," each of which consists of a "hard core" of essential beliefs and a "periphery" of beliefs which serves as a medium between the hard core and the context. According to this point of view, if A is a belief on the periphery of a research programme, and a test is done which decreases its intensity significantly, then A is replaced with an alternate belief A' which is, though incompatible with A and perhaps other peripheral beliefs, still compatible with the hard core of the programme.
Admittedly, the distinction between "hard core" and "periphery" is much clearer in retrospect than at the time a theory is being developed. In reality, the presence of a troublesome piece of data often leads to much debate as to what is peripheral and what is essential. Nonetheless, Lakatosian analysis can be quite penetrating.
For instance, consider the Ptolemaic research programme, the analysis of the motions of heavenly bodies in terms of circular paths. One could argue that the "hard core" here contains the belief that the circle is the basic unit of heavenly motion, and the belief that the earth is the center of the universe; whereas initially the periphery contained, among other things, the belief that the heavenly bodies revolve around the earth in circular paths.
When testing refuted the latter belief, it was rejected and replaced with another belief that was also compatible with the hard core: the belief that the heavenly bodies move in "epicycles," circles around circles around the earth. And when testing refuted this, it was rejected and replaced with the belief that the heavenly bodies move in circles around circles around circles around the earth -- and so on, several more times. Data was accommodated, but the hard core was not touched.
Consider next the Copernican theory, that the planets revolve in circles around the sun. This retains part but not all of the hard core of the Ptolemaic belief system, and it generates a new periphery. In Copernicus's time, it was not clear why, if the earth moved, everything on its surface didn't fly off. There were certain vague theories in this regard, but not until around the time of Newton was there a convincing explanation. These vague, dilemma-ridden theories epitomize Lakatos's concept of periphery.
Philosophers of science have a number of different explanations of the transition from Ptolemaic to Copernican cosmology. It was not that the Copernican belief system explained the data much better than its predecessor; in fact, it has been argued that, when the two are restricted to the same number of parameters, their explanatory power is approximately equal (Feyerabend, 1970). It was not that there was a sociological "crisis" in the scientific community; there was merely a conceptual crisis, which is visible only in retrospect. Extant documents reveal no awareness of crisis.
Was it that the Copernican theory was "simpler"? True, a single circle for each planet seems far simpler than a hierarchy of circles within circles within circles within circles.... But the complexity of the Ptolemaic epicycles is rivalled by the complexity of contemporaneous explanations as to how the earth can move yet the objects on its surface not be blown away. As Feyerabend has rightly concluded, there is no single explanation for this change of belief system; however, detailed historical analysis can yield insight into the complex processes involved.
9.2.1. Belief Generation
Lakatos's ideas can easily be integrated into the above-given model of belief systems. The first step is a simple one: belief in an element of the hard core strongly encourages belief in the other theories of the system, and belief in a theory of the system almost never discourages belief in an element of the hard core. There are many ways to formalize this intuition; for example, given an integer p and a number a, one might define the hard core of a belief system {A1,...,An} as the set of Ai for which the p'th-power average over all j of cij exceeds a. This says that the hard core is composed of those beliefs which many other beliefs depend on.
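A rough Python rendering of this criterion is given below; the power p, the threshold a, the clipping of negative supports, and the support matrix itself are all hypothetical choices made for the sake of illustration.

```python
import numpy as np

def hard_core(c: np.ndarray, p: float = 2.0, a: float = 0.3) -> list[int]:
    """Return the indices i whose p'th-power average outgoing support,
    ((1/n) * sum_j c[i][j]**p)**(1/p), exceeds the threshold a.
    Negative supports are clipped to zero here so that fractional powers
    stay real -- an assumption, not part of the definition in the text."""
    support = np.clip(c, 0.0, None)
    n = c.shape[1]
    averages = (support ** p).mean(axis=1) ** (1.0 / p)
    return [i for i in range(n) if averages[i] > a]

# Hypothetical system: belief 0 supports everything strongly (a hard-core
# candidate); beliefs 1 and 2 lend only weak, scattered support (periphery).
c = np.array([[0.0, 0.8, 0.7],
              [0.1, 0.0, 0.2],
              [0.2, 0.1, 0.0]])
print(hard_core(c))   # [0]
```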
But unfortunately, this sort of characterization of the hard core is not entirely adequate. What it fails to capture is the way the hard core of a research programme not only supports but actually generates peripheral theories. For instance, the hard core of Newtonian mechanics -- the three laws of motion, and the machinery of differential and integral calculus -- is astoundingly adept at producing analyses of particular physical phenomena. One need merely make a few incorrect simplifying assumptions -- say, neglect air resistance, assume the bottom of a river is flat, assume the mass within the sun is evenly distributed, etc. -- and then one has a useful peripheral theory. And when the peripheral theory is refuted, this merely indicates that another "plausible" incorrect assumption is needed.
There is an old story about a farmer who hires an applied mathematician to help him optimize his productivity. The mathematician begins "First, let us assume a spherical cow...," and the farmer fires him. The farmer thinks the mathematician is off his rocker, but all the mathematician is doing is applying a common peripheral element of his belief system. This peripheral element, though absurd in the context of the farmer's belief system, is often quite effective when interpreted in terms of the belief system of modern science. The peripheral theory seems ridiculous "in itself", but it was invented by the hard core for a certain purpose and it serves this purpose well.
For a different kind of example, recall what Newtonian mechanics tells us about the solar system: a single planet orbiting the sun, assuming that both are spherical with uniform density, should move in an ellipse. But in fact, the perihelion of Mercury's orbit precesses by approximately 43 seconds of arc per century more than Newtonian calculations can account for.
This fact can be accommodated within the framework of Newtonian mechanics, for instance by changing the plausible simplifying assumption of uniform spherical mass distribution -- a step which leads to all sorts of interesting, peripheral mathematical theories. In fact, when all known data is taken into account, Newtonian mechanics does predict a precession, just a smaller precession than is observed. So it is easy to suppose that, with more accurate data, the exact amount of precession could be predicted.
But eventually, General Relativity came along and predicted the exact amount of the precession of Mercury's orbit "from first principles," assuming a uniform, spherical sun. Now the precession of Mercury's orbit is seen as a result of the way mass curves space -- a notion entirely foreign to Newtonian physics. But that's another story. The point, for now, is that the hard core of a theory can suggest or create peripheral theories as well as supporting them.
And indeed, it is hard to see how a belief system could survive sustained experimental attack unless some of its component beliefs came equipped with significant generative power. If a belief system is to defend itself when one of its beliefs is attacked, it must be able to generate compatible new beliefs to take the place of the old. These generative elements will be helpful to the system over the long term only if they are unlikely to be refuted -- and an element is least likely to be refuted if it is strongly supported by other elements of the system. Therefore, systems with generative hard cores are the "hardiest" systems; the most likely to preserve themselves in the face of experimental onslaught.
The idea of a "generative hard core" may be formalized in many different ways; however, the most natural course is to avail ourselves of the theory of self-generating component systems developed in Chapters Seven and Eight. In other words, I am suggesting that a scientific belief system, like a linguistic system, is a self-generating structured transformation system. Belief systems combine these two important system-theoretic structures to form something new, something with dazzling synergetic properties not contained in either structure on its own.
Structured transformation systems unite deduction and analogy in a striking way, via the connection between grammar and semantics which continuous compositionality enforces. Self-generating systems provide an incredible power for unpredictable, self-organizing creativity. Putting the two together, one obtains, at least in the best case, an adaptable, sturdy tool for exploring the world: adaptable because of the STS part, and sturdy because of the self-generation. This is exactly what the difficult task of science requires.
9.2.2. Conclusion
In the history of science one has a record of the dynamics of belief systems -- a record which, to some extent, brings otherwise obscure mental processes out into the open. It is clear that, in the history of science, belief has been tremendously systematic. Consistently, beliefs have been discarded, maintained or created with an eye toward compatibility with the generative "hard cores" of dominant belief systems. I suggest -- and this is hardly a radical contention -- that this process is not specific to scientific belief, but is rather a general property of thought.
I have suggested that scientific belief systems are self-generating structured transformation systems. In the following sections I will make this suggestion yet more specific: I will propose that all belief systems are not only self-generating structured transformation systems but also attractors for the cognitive equation.
But in fact, this is almost implicit in what I have said so far. For consider: beliefs in a system support one another, by definition, but how does this support take place on the level of psychological dynamics? By far the easiest way for beliefs to support one another is for them to produce one another. But what do the processes in the dual network produce but patterns? Thus a belief system emerges as a collection of mental processes which is closed under generation and pattern recognition -- an attractor for the cognitive equation.
What Lakatos's model implies is that belief systems are attractors with a special kind of structure: a two-level structure, with hard core separate from periphery. But if one replaces the rigid hard core vs. periphery dichotomy with a gradation of importance, from most central to most peripheral, then one obtains nothing besides a dual network structure for belief systems. The hard core consists of the highest-level processes; the outermost periphery, of the lowest-level ones. Processes are grouped hierarchically for effective production and application; and heterarchically for effective associative reference.
In this way, a belief system emerges as a sort of "mini mind," complete in itself both structurally and dynamically. And one arrives at an enchanting conceptual paradox: only by attaining the ability to survive separately from the rest of the mind, can a belief system make itself of significant use to the rest of the mind. This conclusion will return in Chapter Twelve, equipped with further bells and whistles.
9.3. A CONSPIRATORIAL BELIEF SYSTEM
I have discussed some of the most outstanding belief systems ever created by the human mind: Newtonian mechanics, Galilean astronomy, general relativity. Let us now consider a less admirable system of beliefs: the conspiracy theory of a woman, known to the author, suffering from paranoid delusion. As I am a mathematician and not a clinical psychologist, I am not pretending to offer a "diagnosis" of the woman possessing this belief system. My goal is merely to broaden our conceptual horizons regarding the nature of psychodynamics, by giving a specific example to back up the theoretical abstractions of the cognitive equation and the dual network.
9.3.1. Jane's Conspiratorial Belief System
"Jane" almost never eats because she believes that "all her food" has been poisoned. She has a history of bulimia, and she has lost twenty-five pounds in the last month and a half; she is now 5'1'' and eighty five pounds. She believes that any food she buys in a store or a restaurant, or receives at the home of a friend, has been poisoned; and when asked who is doing the poisoning, she generally either doesn't answer or says, accusingly, " You know!" She has recurrent leg pains, which she ascribes to food poisoning.
Furthermore, she believes that the same people who are poisoning her food are following her everywhere she goes, even across distances of thousands of miles. When asked how she can tell that people are following her, she either says "I'm not stupid!" or explains that they give her subtle hints such as wearing the same color clothing as her. When she sees someone wearing the same color clothing as she is, she often assumes the person is a "follower," and sometimes confronts the person angrily. She has recently had a number of serious problems with the administration of the college which she attends, and she believes that this was due to the influence of the same people who are poisoning her food and following her.
To give a partial list, she believes that this conspiracy involves: 1) a self-help group that she joined several years ago, when attending a college in a different part of the country, for help with her eating problems; 2) professors at this school, from which she was suspended, and which she subsequently left; 3) one of her good friends from high school.
Her belief system is impressively resistant to test. If you suggest that perhaps food makes her feel ill because her long-term and short-term eating problems have altered her digestive system for the worse, she concludes that you must be either stupid or part of the conspiracy. If you remind her that five years ago doctors warned her that her leg problem would get worse unless she stopped running and otherwise putting extreme pressure on it, and suggest that perhaps her leg would be better if she stopped working as a dancer, she concludes that you must be either stupid or part of the conspiracy. If you suggest that her problems at school may have partly been due to the fact that she was convinced that people were conspiring against her, and consequently acted toward them in a hostile manner -- she concludes that you must be either stupid or part of the conspiracy.
9.3.2. Jane and the Cognitive Equation
I have analyzed the structure of Jane's conspiracy theory; now how does this relate to the "cognitive equation of motion" given in Chapter Eight? Recall that this equation, in its simplest incarnation, says roughly the following:
1) Let all processes that are "connected" to one another act on one another.
2) Take all patterns that were recognized in other processes during Step (1), let these patterns be the new set of processes, and return to Step (1).
An attractor for this dynamic is then a set of processes X with the property that each element of the set is a) produced by the interaction of some elements of X, b) a pattern in the set of entities produced by the interactions of the elements of X.
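The following toy sketch, in Python, shows what this closure condition amounts to operationally; the process labels and the table recording which groups produce which patterns are invented stand-ins for real mental processes.

```python
# Toy version of the two-step cognitive dynamic.  Processes are bare labels,
# and a hypothetical table records which patterns get recognized when a given
# group of processes acts on one another.
PRODUCES = {
    frozenset({"A", "B"}): {"C"},
    frozenset({"B", "C"}): {"A"},
    frozenset({"A", "C"}): {"B"},
}

def step(processes: set) -> set:
    """One iteration: let connected processes act, keep the recognized patterns."""
    produced = set()
    for group, patterns in PRODUCES.items():
        if group <= processes:
            produced |= patterns
    return produced

X = {"A", "B", "C"}
print(step(X) == X)   # True: X is closed under production and recognition,
                      # i.e. a fixed point of the toy dynamic
```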
In order to show that Jane's belief system is an attractor for this dynamic, it suffices to show that each element of the belief system is a pattern among other elements of the system, and is potentially producible by other elements of the system. Consider, for instance, the seven beliefs
C0: There is a group conspiring against me
C1: My food is poisoned by the conspiracy
C2: My friends and co-workers are part of the conspiracy
C3: My leg pain is caused by the conspiracy
C4: My food tastes bad
C5: My friends and co-workers are being unpleasant to me
C6: My leg is in extreme pain
In the following discussion, it will be implicitly assumed that each of these beliefs is stored redundantly in the brain; that each one is contained in a number of different "neural maps" or "mental processes." Thus, when it is said that C0, C1, C2 and C6 "combine to produce" C3, this should be interpreted to mean that a certain percentage of the time, when these four belief-processes come together, the belief-process C3 is the result.
Furthermore, it must be remembered that each of the brief statements listed above next to the labels Ci is only a shorthand way of referring to what is in reality a diverse collection of ideas and events. For instance, the statement "my co-workers are being unpleasant to me" is shorthand for a conglomeration of memories of unpleasantness. Different processes encapsulating C5 may focus on different specific memories.
Without further ado, then, let us begin at the beginning. Obviously, the belief C0 is a pattern among the three beliefs which follow it. So, suppose that each of the mental processes corresponding to C1, C2 and C3 is equipped with a generalization routine of the form "When encountering enough other beliefs that contain a certain sufficiently large component in common with me, create a process stating that this component often occurs." If this is the case, then C0 may also be created by the cooperative action of C1, C2 and C3, or some binary subset thereof.
One might wonder why the process corresponding to, say, C1 should contain a generalization routine of this type. The only answer is that such routines are of general utility in intelligent systems, and that they add only negligible complexity to a process such as C1 which deals with such formidable concepts as "food" and "conspiracy." In a self-organizing model of the mind, one may not assume that recognitive capacity is contained in a single "generalization center"; it must be achieved in a highly distributed way.
9.3.2.1. Production of Particular Conspiracies
Next, what about C1? Taking C0, C2, C3 and C4 as given, C1 is a fairly natural inference. Suppose the process corresponding to C0 contains a probabilistic generalization routine of the form "The greater the number of events that have been determined to be caused by conspiracy, the more likely it is that event X is caused by conspiracy." Then when C0 combines with C2 and C3, it will have located two events determined to be caused by conspiracy. And when this compound encounters C4, the generalization capacity of C0 will be likely to lead to the creation of a belief such as C1.
So C1 is produced by the cooperative action of these four beliefs. In what sense is it a pattern in the other beliefs? It is a pattern because it simplifies the long list of events that are summarized in the simple statement "My food is being poisoned." This statement encapsulates a large number of different instances of apparent food poisoning, each with its own list of plausible explanations. Given that the concept of a conspiracy is already there, the attribution of the poisoning to the conspiracy provides a tremendous simplification; instead of a list of hypotheses regarding who did what, there is only the single explanation "They did it." Note that for someone without a bent toward conspiracy theories (without a strong C0), the cost of supplying the concept "conspiracy" would be sufficiently great that C1 would not be a pattern in a handful of cases of apparent food poisoning. But for Jane, I(C4|C1,C0) < I(C4|C0). Relative to the background information C0, C1 simplifies C4.
Clearly, C2 and C3 may be treated in a manner similar to C1.
9.3.2.2. Production of Actual Events
Now let us turn to the last three belief-processes. What about C5, the belief that her co-workers are acting unpleasantly toward her? First of all, it is plain that the belief C2 works to produce the belief C5. If one believes that one's co-workers are conspiring against one, one is far more likely to interpret their behavior as being unpleasant.
And furthermore, given C2, the more unpleasant her co-workers are, the simpler the form C2 can take. If the co-workers are acting pleasant, then C2 has the task of explaining how this pleasantry is actually false, and is a form of conspiracy. But if the co-workers are acting unpleasant, then C2 can be vastly simpler. So, in this sense, it may be said that C5 is a pattern in C2.
By similar reasoning, it may be seen that C4 and C6 are both produced by other beliefs in the list, and patterns in or among other beliefs in the list.
9.3.2.3. Jane's Conspiracy as a "Structural Conspiracy"
The arguments of the past few paragraphs are somewhat reminiscent of R.D. Laing's Knots (1972), which describes various self-perpetuating interpersonal and intrapersonal dynamics. Some of Laing's "knots" have been cast in mathematical form by Francisco Varela (1978). However, Laing's "knots" rather glibly treat self-referential dynamics in terms of propositional logic, which as we have seen is of dubious psychological value. The present treatment draws on a far more carefully refined model of the mind.
It follows from the above arguments that Jane's conspiratorial belief system is in fact a structural conspiracy. It is approximately a fixed point for the "cognitive law of motion." A more precise statement, however, must take into account the fact that the specific contents of the belief-processes Ci are constantly shifting. So the belief system is not exactly fixed: it is subject to change, but only within certain narrow bounds. It is a strange attractor for the law of motion.
Whether it is a chaotic attractor is not obvious from first principles. However, this question could easily be resolved by computer simulations. One would need to assume particular probabilities for the creation of a given belief from the combination of a certain group of beliefs, taking into account the variety of possible belief-processes falling under each general label Ci. Then one could simulate the equation of motion and see what occurred. My strong suspicion is that there is indeed chaos here. The specific beliefs and their strengths most likely fluctuate pseudorandomly, while the overall conspiratorial structure remains the same.
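A sketch of the sort of simulation being proposed might look as follows. Every number in it -- the initial counts, the creation probabilities, the decay rate -- is invented; the point is only to show how one could watch whether the conspiratorial structure persists while the particular belief-processes embodying it come and go.

```python
import random

# Counts of processes embodying each belief C0..C6.  Seed values and the
# creation/decay probabilities below are invented purely for illustration.
counts = {i: 5 for i in range(7)}

# (source beliefs, target belief, creation probability per step)
RULES = [
    ({1, 2, 3}, 0, 0.6),     # particular conspiracies jointly regenerate C0
    ({0, 2, 3, 4}, 1, 0.5),  # C0 plus the other conspiracies plus bad food -> C1
    ({0, 1, 3, 5}, 2, 0.5),
    ({0, 1, 2, 6}, 3, 0.5),
    ({1}, 4, 0.4),           # believing the food is poisoned makes it taste bad
    ({2}, 5, 0.4),           # expecting hostility helps produce perceived hostility
    ({3}, 6, 0.4),
]
DECAY = 0.1   # chance that any given belief-process drops out on a given step

def step(counts):
    new = {i: sum(random.random() > DECAY for _ in range(n)) for i, n in counts.items()}
    for sources, target, prob in RULES:
        if all(counts[s] > 0 for s in sources) and random.random() < prob:
            new[target] += 1
    return new

for _ in range(200):
    counts = step(counts)
print(counts)   # individual counts drift pseudorandomly from run to run, but in
                # typical runs no belief dies out: the conspiratorial structure persists
```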
9.3.3. Implication and Conspiracy (*)
As an aside, it is interesting to relate the self-production of Jane's belief system with the notion of informational implication introduced in Chapter Four. Recall that A significantly implies B, with respect to a given deductive system, if there is some chain of deductions leading from A to B, which uses A in a fundamental way, and which is at least as simple as other, related chains of deductions. What interests us here is how it is possible for two entities to significantly imply each other.
Formally, "A implies B to degree K" was written as A -->K B, where K was defined as the minimum of cL + (1-c)M, for any sequence Y of deductions leading from A to B (any sequence of expressions
A=B0,B1,...,Bn=B, where Bi+1 follows from Bi according to one of the transformation rules of the deductive system in question). L was the ratio |B|/|Y|, and M was a conceptually simple but formally messy measure of how much additional simplicity Y provides over those other proofs that are very similar to it. Finally, c was some number between 0 and 1, inserted to put the quantities L and M on a comparable "scale."
For the sake of simplicity, let us rechristen the beliefs C1, C2 and C3 as "F," "W," and "L" respectively. In other words, L denotes the hypothesis that the leg pain is due to a conspiracy, W denotes the hypothesis that the work and social problems are due to a conspiracy, and F denotes the hypothesis that the food problems are due to a conspiracy.
Phrased in terms of implication, the self-generating dynamics of Jane's belief system would seem to suggest
(L and W) -->K(F) F
(F and W) -->K(L) L
(F and L) -->K(W) W
where the degrees K(F), K(L) and K(W) are all non-negligible. But how is this possible?
Let Y(F) denote the "optimal sequence" used in the computation of K(F); define Y(L) and Y(W) similarly. One need not worry about exactly what form these optimal sequences take; it is enough to state that the "deductive system" involved has to do with Jane's personal belief system. Her belief system clearly includes an analogical transformation rule based on the idea that if one thing is caused by a conspiracy, then it is likely that another thing is too, which transforms statements of the form "A is likely caused by a conspiracy" into other statements of the form "A and ___ are likely caused by a conspiracy."
Then, it is clear that L(Y) cannot be large for all of these Y, perhaps not for any of them. For one has
L[Y(F)] = |F|/|Y(F)| < |F|/[|L|+|W|]
L[Y(W)] = |W|/|Y(W)| < |W|/[|L|+|F|]
L[Y(L)] = |L|/|Y(L)| < |L|/[|F|+|W|]
For example, if each of the conspiracy theories is of equal intuitive simplicity to Jane, then all these L(Y)'s are less than 1/3. Or if, say, the work theory is twice as simple as the others, then L[Y(W)] may be close to 1, but L[Y(F)] and L[Y(L)] are less than 1/4. In any case, perhaps sometimes the most "a priori" plausible of the beliefs may attain a fairly large K by having a fairly large L, but for the others a large K must be explained in terms of a large M.
So, recall how the constant M involved in determining the degree K in A -->K B was defined -- as the weighted sum, over all proofs Z of B, of L(Z). The weight attached to Z was determined by I(Z|Y), i.e. by how similar Z is to Y. A power p was introduced into the weight functions, in order to control how little those Z that are extremely similar to Y are counted.
If M[Y(W)] is large, this means that the theory that a conspiracy is responsible for Jane's work problems is much simpler than other theories similar to it. This can be taken in two ways. If p is very large, then M basically deals only with proofs that are virtually identical to Y. On the other hand, if p is moderate in size, then M will incorporate a comparison of the simplicity granted by Y(W) with the simplicity of true alternatives, such as the theory that Jane herself is responsible for her work problems. Now, to almost any other person, it would be very simple indeed to deduce Jane's work problems from Jane's personality. But to Jane herself, this deduction is not at all intuitive.
So, formally speaking, Jane's circular implication can be seen to come from two sources. First of all, a very large p, which corresponds to a very lenient definition of what constitutes a "natural proof" of something. Or, alternatively, a blanket negative judgement of the simplicity of all alternative theories. Both of these alternatives amount to the same thing: excessive self-trust, non-consideration of alternative hypotheses ... what I will call conservatism.
So, in sum, the informational-implication approach has not given us terribly much by way of new insight into Jane's situation. What I have shown, on the other hand, is that Jane's real-life delusional thinking fits in very nicely with the formal theory of reasoning given in Chapter Four. This sort of correspondence between theory and everyday reality is precisely what the standard Boolean-logic approach to reasoning lacks.
9.4 BELIEF AND RATIONALITY
Jane's belief system is clearly, according to the standards of modern "sane" society, irrational. It is worth asking how this irrationality is tied in with the dynamical properties of the belief system, as discussed in the previous section. This investigation will lead toward a strikingly general dynamical formulation of the concept of rationality.
9.4.1. Conservatism and Irrelevance
The irrationality of Jane's belief system manifests itself in two properties. First of all, Jane is simply too glib in her generation of theories. Given any unpleasant situation, her belief system has no problem whatsoever reeling off an explanation: the theory is always "the conspirators did it." New events never require new explanations. No matter how different one event is from another, the explanation never changes. Let us call this property conservatism.
To put it abstractly, let Es denote the collection of beliefs which a belief system generates in order to explain an event s. That is, when situation s arises, Es is the set of explanatory processes which the belief system generates. Then one undesirable property of Jane's belief system is that the rate of change of Es with respect to s is simply too small.
The second undesirable property of Jane's belief system is, I suggest, that the theories created to explain an event never have much to do with the specific structure of the event. Formally, the collection of patterns which emerge between Es and s is invariably very small. Her belief system explains an event in a way which has nothing to do with the details of the actual nature of the event. Let us call this property irrelevance.
Of course, Jane would reject these criticisms. She might say "I don't need to change my explanation; I've always got the right one!" A dogmatist of this sort is the exact opposite of the prototypical skeptic, who trusts nothing. The skeptic is continually looking for holes in every argument; whereas Jane doesn't bother to look for holes in any argument. She places absolute trust in one postulate, and doesn't even bother to look for holes in arguments purporting to contradict it, for she simply "knows" the holes are there.
This attitude may be most easily understood in the context of the mathematical theory of pattern. The pattern-theoretic approach to intelligence assumes that the environment is chaotic on the level of detailed numerical parameters, but roughly structurally predictable. In Charles S. Peirce's phrase, it assumes that the world possesses a "tendency to take habits."
Under this assumption, it is clear that conservatism, irrelevance, and reluctance to test are, in any given case, fairly likely to be flaws. First of all because if change is likely, if old ideas are not necessarily true for the future, then a belief system which does not change is undesirable. And secondly because if induction is imperfect, and the mind works by induction, then one must always face the fact that one's own conclusions may be incorrect.
9.4.2. The Genesis of Delusion
Why, exactly, is Jane's belief system conservative and irrelevant? To answer this, it is convenient to first ask how Jane's mind ever got into the irrational attractor which I have described.
The beginning, it seems, was an instance of C5 and C2: a professor at school was asking her questions relating to her Overeaters Anonymous group, and she came to the conclusion that people were talking about her behind her back. Whether or not this initial conspiracy was real is not essential; the point is that it was nowhere nearly as unlikely as the conspiracies imagined by her later.
Even if no real conspiracy was involved, I would not say that this first step was "unjustified". It was only a guess; and there is nothing unjustified about making a wrong guess. After all, the mind works largely by trial and error. What is important is that Jane's initial belief in a conspiracy was not strongly incompatible with the remainder of her sane, commonsensical mind.
After this, all that were needed were a few instances of C4 or C6, and a few more instances of C5. This caused the creation of some C0 belief-processes; then the feedback dynamics implicit in the analysis of the previous section kicked in. The point is that only a small number of Ci are necessary to start a cybernetic process leading to a vast proliferation. Eventually C0 became so strong that plausible stories about conspiracies were no longer necessary; an all-purpose "them" was sufficient.
Most of us weather unpleasant experiences without developing extravagant conspiracy theories. In the initial stages of its growth, Jane's conspiratorial belief system depended crucially on certain other aspects of Jane's personality; specifically, on her absolute refusal to accept any responsibility for her misfortunes. But once this early phase was past, the spread of her belief system may have had little to do with the remainder of her mind. It may have been a process of isolated expansion, like the growth of a cancer.
9.4.3. Rationality and Dynamics
The lesson is that irrational belief systems are self-supporting, self-contained, integral units. Considered as attractors, they are just as genuine and stable as the belief systems which we consider "normal." The difference is that they gain too much of their support from internal self-generating dynamics -- they do not draw enough on the remainder of the mental process network.
This is perhaps the most objective test of rationality one can possibly pose: how much support is internal, and how much is external? Excessive internal support is clearly inclined to cause conservatism and irrelevance. In this way the irrationality of a person's mind may be traced back to the existence of overly autonomous subattractors of the cognitive equation. The mind itself is an attractor of the cognitive equation; but small portions of the mind may also be attractors for this same equation. When a portion of the mind survives because it is itself an attractor, rather than because of its relations with the rest of the mind, there is a significant danger of irrationality.
Looking ahead to Chapter Twelve, another way to put this is as follows: irrationality is a consequence of dissociation. This formulation is particularly attractive since dissociation has been used as an explanation for a variety of mental illnesses and strange psychological phenomena -- schizophrenia, MPD, post-traumatic stress syndrome, cryptomnesia, hypnosis, hysterical seizure, etc. (Van der Kolk et al, 1991). The general concept of dissociation is that of a "split" in the network of processes that makes up the mind. Here I have shown that this sort of split may arise due to the dynamical autonomy of certain collections of processes.
9.5. MONOLOGUE AND DIALOGUE
Consider once again Galileo's belief that what one sees when one points a telescope out into space is actually there. As noted above, this seems quite reasonable from today's perspective. After all, it is easy to check that when one points a telescope toward an earthbound object, what one sees is indeed there. But we are accustomed to the Newtonian insight that the same natural laws apply to the heavens and the earth; and the common intuition of Galileo's time was quite the opposite. Hence Galileo was going against commonsense logic.
Also, it was said at the time that he was making hypotheses which could not possibly be proven, merely dealing in speculation. Now we see that this objection is largely unfounded; we have measured the heavens with radio waves, we have sent men and robotic probes to nearby heavenly bodies, and the results agree with what our telescopes report. But to the common sense of Galileo's time, the idea of sending men into space was no less preposterous than the notion of building a time machine; no less ridiculous than the delusions of a paranoiac.
Furthermore, it is now known that Galileo's maps of the moon were drastically incorrect; so it is not exactly true that what he saw through his primitive telescopes was actually there!
Galileo argued that the telescope gave a correct view of space because it gave a correct view of earth; however, others argued that this analogy was incorrect, saying "when the telescope is pointed toward earth, everyone who looks through it sees the same thing; but when it's pointed toward space, we often see different things."
Now we know enough about lenses and the psychology of perception to make educated guesses as to the possible causes of this phenomenon, reported by many of those who looked through Galileo's telescopes. But at the time, the only arguments Galileo could offer were of the form "There must be something funny going on either in your eye or in this particular lens, because what is seen through the telescope in the absence of extraneous interference is indeed truly, objectively there." In a way, he reasoned dogmatically and ideologically rather than empirically.
How is Galileo's belief system intrinsically different from the paranoid belief system discussed above? Both ignore common sense and the results of tests, and both are founded on "wild" analogies. Was Galileo's train of thought just as crazy a speculation as Jane's, the only difference being that Galileo was lucky enough to be "right"? Or is it more accurate to say that, whereas both of them blatantly ignored common logic in order to pursue their intuitions, Galileo's intuition was better than Jane's? I find the latter explanation appealing, but it begs the question: was the superiority of Galileo's intuition somehow related to the structure of his belief system?
Whereas Jane's belief system is conservative and irrelevant, Galileo's belief system was productive. Once you assume that what you see through the telescope is really out there, you can look at all the different stars and planets and draw detailed maps; you can compare what you see through different telescopes; you can construct detailed theories as to why you see what you see. True, if it's not really out there then you're just constructing an elaborate network of theory and experiment about the workings of a particular gadget. But at least the assumption leads to a pursuit of some complexity: it produces new pattern. A conspiracy theory, taken to the extreme described above, does no such thing. It gives you access to no new worlds; it merely derides as worthless all attempts to investigate the properties of the everyday world. Why bother, if you already know what the answer will be?
Call a belief system productive to the extent that it is correlated with the emergence of new patterns in the mind of the system containing it. I suggest that productivity in this sense is strongly correlated with the "reasonableness" of belief systems. The underlying goal of the next few sections is to pursue this correlation, in the context of the dual network and the cognitive equation.
9.5.1 Stages of Development
One often hears arguments similar to the following: "In the early stages of the development of a theory, anything goes. At this stage, it may be advisable to ignore discouraging test results -- to proceed counterinductively. This can lend insight into flaws in the test results or their standard interpretations, and it can open the way to creative development of more general theories which may incorporate the test results. And it may be advisable to think in bizarre, irrational ways -- so as to generate original hypotheses. But once this stage of discovery is completed and the stage of justification is embarked upon, these procedures are no longer allowed: then one must merely test one's hypotheses against the data."
Of course, this analysis of the evolution of theories is extremely naive: science does not work by a fragmented logic of hypothesis formation and testing, but rather by a systematic logic of research programmes. But there is obviously some truth to it.
I have suggested that two properties characterize a dogmatic belief system:
1) the variation in the structure of the explanations offered with respect to the events being explained is generally small (formally, d[St(Es),St(Et)]/d#[s,t] is generally small, where d and d# denote appropriate metrics)
2) the nature of explanations offered has nothing to do with the events being explained (formally, Em(Es,s) is generally small)
Intuitively, these conditions -- conservatism and irrelevance -- simply mean that the system is not significantly responsive to test. In light of these criteria, I propose the following fundamental normative rule:
During the developmental stage, a belief system may be permitted to be unresponsive to test results (formally, to have consistently small d[St(Es),St(Et)]/d#[s,t] and/or Em(Es,s)). However, once this initial stage has passed, such unresponsiveness should no longer be considered justified.
This is a systemic rendering of the classical distinction between "context of discovery" and "context of justification."
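To make these two indicators a bit more concrete, here is a minimal computational sketch in Python. The structure operator St, the emergence measure Em, and the metrics d and d# are replaced by crude set-based surrogates (pattern-sets of labels, Jaccard distance, simple overlap); these surrogates, and the sample data, are illustrative assumptions only, not the formal definitions of Chapter Two.

# A toy rendering of the conservatism and irrelevance indicators.
# St, Em, d and d# are replaced by crude surrogates: an "explanation" or an
# "event" is just a set of pattern-labels, d and d# are Jaccard distances,
# and Em(Es, s) is approximated by explanation/event overlap. All of these
# choices, and the sample data, are illustrative assumptions.

def distance(a, b):
    # Jaccard distance between two pattern-sets (stand-in for d and d#).
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def conservatism(expl, events):
    # Average of d[St(Es), St(Et)] / d#[s, t] over all pairs of events.
    # Values near zero: the explanations barely vary as the events vary.
    names = list(events)
    ratios = []
    for i, s in enumerate(names):
        for t in names[i + 1:]:
            d_events = distance(events[s], events[t])
            if d_events > 0:
                ratios.append(distance(expl[s], expl[t]) / d_events)
    return sum(ratios) / len(ratios) if ratios else 0.0

def irrelevance(expl, events):
    # Crude proxy for "Em(Es, s) is generally small": one minus the average
    # overlap between each explanation and the event it explains.
    overlaps = [1.0 - distance(expl[s], events[s]) for s in events]
    return 1.0 - sum(overlaps) / len(overlaps)

# A maximally dogmatic system: the same explanation for every event.
events = {"leg": {"pain", "walking"}, "work": {"boss", "memo"}, "food": {"taste", "nausea"}}
expl = {name: {"conspiracy"} for name in events}
print(conservatism(expl, events))   # 0.0 -- the explanations never vary
print(irrelevance(expl, events))    # 1.0 -- the explanations share nothing with the events

On this rendering, a thoroughly dogmatic system scores a conservatism ratio near zero and an irrelevance near one; a responsive system's explanations would instead vary with, and share pattern with, the events they explain.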
I will call any belief system fulfilling the conditions of non-conservatism and non-irrelevance a dialogical system. A dialogical system is one which engages in a dialogue with its context. The opposite of a dialogical system is a monological system, a belief system which speaks only to itself, ignoring its context in all but the shallowest respects.
A system which is in the stage of development, but will eventually be dialogical, may be called predialogical. In its early stage of development, a predialogical system may be indistinguishable from a monological one. Predialogicality, almost by definition, can be established only in retrospect. Human minds and societies deal with the problem of distinguishing monologicality from predialogicality the same way they deal with everything else -- by induction and analogy, by making educated guesses based on what they've seen in the past. And, of course, these analogies draw on certain belief systems, thus completing the circle and destroying any hope of gleaning a truly objective theory of "justification."
The terms "dialogical" and "monological" are not original; they were used by Mikhail Bakhtin in his analysis of Dostoevsky. The reality of Dostoevsky's novels is called "dialogical," meaning that it is the result of significant interaction between different world-views.
His path leads not from idea to idea, but from orientation to orientation. To think, for him, means to question and to listen, to try out orientations.... Even agreement retains its dialogic character ... it never leads to a merging of voices and truths in a single impersonal truth, as in the monologic world.
Each of Dostoevsky's major novels contains a number of conflicting belief systems -- and the action starts when the belief systems become dialogical in the sense defined here. They test each other, and produce creative explanations in response to the phenomena which they provide for each other.
9.5.2 Progressive and Regressive
Lakatos has proposed that good scientific research programmes are "progressive" in that they consistently produce new results which are surprising or dramatic. Bad research programmes are "regressive" in that they do not. This is a valuable analysis, but I don't think it gets to the core of the matter. "Surprising" and "dramatic" are subjective terms; so this criterion doesn't really say much more than "a programme is good if it excites people."
However, I do think that the "monologicity/dialogicity" approach to justification is closely related to Lakatos's notion of progressive and regressive research programmes. It is quite clear that if a system always says the same thing in response to every test, then it is unlikely to give consistently interesting output, and is hence unlikely to be progressive. And I suggest that the converse is also true: that if a system is capable of incorporating sensitive responses to data into its framework, then it is reasonably likely to say something interesting or useful about the context which generates the data.
Another way to phrase this idea is as follows: in general, dialogicality and productivity are roughly proportional. That is: in the real world, as a rule of thumb, any system which produces a lot of new pattern is highly dialogical, and any system which is highly dialogical produces a lot of new pattern.
The second of these assertions follows from the definition of dialogicality. The first, however, does not follow immediately from the nature of belief systems, but only from the general dynamics of mind; I will return to it in Section 9.7.
9.5.3 Circular Implication Structure
For a slightly different point of view on these issues, let us think about belief systems in terms of implication. Recall the passage above in which I analyzed the origins of Jane's paranoid belief system. I considered, among others, the following triad of implications:
My leg pain and my trouble at work are due to conspiracies, so my problem with food probably is too
My trouble at work and my problem with food are due to conspiracies, so my leg pain probably is too
My leg pain and my problem with food are due to conspiracies, so my trouble at work probably is too
In formulas, I let L denote the hypothesis that the leg pain is due to a conspiracy, W denote the hypothesis that the work problems are due to a conspiracy, and F denote the hypothesis that the food problems are due to a conspiracy, and I arrived at:
(L and W) --> F
(W and F) --> L
(L and F) --> W
(where each implication, in accordance with the theory of informational implication, had a certain degree determined by the properties of Jane's belief system).
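A small quantitative cartoon may help to show how such a triad of graded implications sustains itself. The implication degree (0.8), the initial belief strengths, and the use of fuzzy min/max operators below are all invented for illustration; they mimic only the circular structure of informational implication, not its actual definition.

# Toy model of the circular implication triad. The degree 0.8 and the initial
# strengths are invented; min models conjunction, max models "keep whatever
# support is strongest." This is a cartoon of the circular structure, not the
# book's informational implication.

def support_round(bel, degree=0.8):
    L, W, F = bel["L"], bel["W"], bel["F"]
    return {
        "F": max(F, degree * min(L, W)),   # (L and W) --> F
        "L": max(L, degree * min(W, F)),   # (W and F) --> L
        "W": max(W, degree * min(L, F)),   # (L and F) --> W
    }

beliefs = {"L": 0.9, "W": 0.9, "F": 0.2}   # the food hypothesis starts out weak
for _ in range(5):
    beliefs = support_round(beliefs)
print(beliefs)   # F has been pulled up to roughly 0.72: the weak hypothesis
                 # inherits support from the other two, with no new evidence

The point of the cartoon is that the circle raises the weak hypothesis to a strength determined entirely by the other two hypotheses and the implication degree, without reference to any evidence outside the triad.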
The same basic implication structure can be associated with any belief system, not just a conspiratorial belief system. Suppose one has a group of phenomena, and then a group of hypotheses of the form " this phenomenon can be explained by my belief system." These hypotheses will support one another if a large number of implications of the form
this and this and ... this can be explained by my belief system
--> that can be explained by my belief system
hold with nontrivially high degree. Earlier I reviewed conditions under which a collection of implications of this form can hold with nontrivially high degree. Our conclusion was that a high degree of conservatism is required: one must, when determining what follows what, not pay too much attention to hypotheses dissimilar to those which one has already conceived. If a high degree of conservatism is present, then it is perfectly possible for a group of beliefs to mutually support each other in this manner.
For a very crude and abstract example, consider the belief that the outside world is real, and the belief that one's body is real. One believes the outside world is real because one feels it -- this is G.E. Moore's classic argument, poor philosophy but good common sense ... to prove the external world is really there, kick something! And, on the other hand, why does one believe one's body is real and not an hallucination? Not solely because of one's internal kinesthetic feelings, but rather largely because of the sensations one gets when moving one's hand through the air, walking on the ground, and in general interacting with the outside world.
It doesn't take much acumen to see how these two phenomenological "proofs" fit together. If the outside world were an hallucination, then moving one's body through it would be no evidence for the reality of one's body. One has two propositions supporting one another.
According to the dynamics of the dual network, various belief systems will compete for survival -- they will compete not to have the processes containing their component beliefs reprogrammed. I suggest that circular support structures are an excellent survival strategy, in that they prevent the conception of hypotheses other than those already contained in the belief system.
But opposed to this, of course, is the fact that the conservatism needed to maintain a circular support structure is fundamentally incompatible with dialogicality. Circular support structures and dialogicality are both quality survival strategies, and I suggest that both strategies are in competition in most large belief systems. Dialogicality permits the belief system to adapt to new situations, and circular support structures permit the belief system to ignore new situations. In order to have long-term success, a belief system must carefully balance these two contradictory strategies -- enough dialogicality to consistently produce interesting new pattern, and enough circular support to avoid being wiped out when trouble arises.
The history of science, as analyzed by Kuhn, Feyerabend, Lakatos and others, shows that in times of crisis scientific belief systems tend to depend on circular support. In the heyday of Newtonian science, there was a little circular support: scientists believed the Newtonian explanation of W partly because the Newtonian explanations of X, Y and Z were so good, and believed the Newtonian explanation of X partly because the Newtonian explanations of W, Y and Z were so good, et cetera. But toward the end of the Newtonian era, many of the actual explanations declined in quality, so that this circular support became a larger and larger part of the total evidence in support of each hypothesis of Newtonian explanation.
Circular implication structure is an inevitable consequence of belief systems being attractors for the cognitive equation. But the question is, how much is this attraction relied on as the sole source of sustenance for the belief system? If circular support, self-production, is the belief system's main means of support, then the belief system is serving little purpose relative to the remainder of the mind: it is monological. This point will be pursued in more detail in Chapter Twelve.
9.7 DISSOCIATION AND DIALOGUE
So, in conclusion, a belief system is: 1) a miniature dual network structure, 2) a structured transformation system, and 3) an attractor for the cognitive equation.
What does this finally say about the proposed correlation between dialogicality and productivity?
It is, of course, conceivable that a monological system might create an abundance of new pattern. To say that this is highly unlikely is to say that, in actuality, new pattern almost always emerges from significant interaction, from systematic testing. But why should this be true?
The correct argument, as I have hinted above, proceeds on grounds of computational efficiency. This may at first seem philosophically unsatisfying, but on the other hand it is very much in the spirit of pattern philosophy -- after all, the very definition of pattern involves an abstract kind of "computational efficiency."
A monological system, psychologically, represents a highly dissociated network of belief processes. This network does of course interact with the remainder of the mind -- otherwise it would have no effects. But it restricts its interactions to those in which it can play an actor role; it resists being modified, or entering into symbiotic loops of inter-adjustment. This means that when a monological belief system solves a problem, it must rely only, or primarily, upon its own resources.
But the nature of thought is fundamentally interactive and parallel: intelligence is achieved by the complex interactions of different agents. A dialogical belief system containing N modestly sized processes can solve problems whose intrinsic computational complexity puts them beyond the reach of any excessively dissociated network of N modestly sized processes. For a dialogical system can solve problems by cooperative computation: by using its own processes to request contributions from outside processes. A monological system, on the other hand, cannot make a habit of interacting intensively with outside processes -- if it did, it would not be monological.
This, I suggest, is all there is to it. Despite the abstract terminology, the idea is very simple. Lousy, unproductive belief systems are lousy precisely because they keep to themselves; they do not make use of the vast potential for cooperative computation that is implicit in the dual network. This is the root of their conservatism and irrelevance. They are conservative and irrelevant because, confronted with the difficult problems of the real world, any belief system of their small size would necessarily be conservative and irrelevant, if it did not extensively avail itself of the remainder of the mind.
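The cooperative-computation argument can itself be cartooned in a few lines of Python. The "processes," the fragments they hold, and the pooling protocol below are invented for illustration; the only point is that the very same N processes succeed or fail depending on whether they are permitted to draw on the rest of the mind.

# Toy contrast between a dissociated (monological) and a cooperative
# (dialogical) solver. A "process" holds a few pattern-fragments; a problem
# counts as solved only if every required fragment is contributed by some
# process in the pool.

from itertools import chain

class Process:
    def __init__(self, fragments):
        self.fragments = set(fragments)

    def contribute(self, problem):
        # Offer whatever fragments of the problem this process recognizes.
        return self.fragments & problem

def solves(own_processes, problem, outside_processes=()):
    # A monological system passes no outside processes; a dialogical one
    # requests contributions from the rest of the mind as well.
    pool = set()
    for proc in chain(own_processes, outside_processes):
        pool |= proc.contribute(problem)
    return pool == problem

problem = {"p1", "p2", "p3", "p4"}
own = [Process({"p1"}), Process({"p2"})]
rest_of_mind = [Process({"p3"}), Process({"p4", "p5"})]

print(solves(own, problem))                  # False: the system's own resources fall short
print(solves(own, problem, rest_of_mind))    # True: cooperative computation succeeds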
All this leaves only one question unanswered: why do monological systems arise, if they are unproductive and useless? The answer to this lies in the cognitive equation. Attractors can be notoriously stubborn. And this leads us onward....