Ben Goertzel
Psychology Department
University of Western Australia
A new, unified theory of consciousness is proposed, beginning from the axiom that consciousness is a force of "pure spontaneity," which appears in the world of structure as randomness. This very simple viewpoint is seen to elucidate issues in the quantum theory of measurement, the neuropsychology of ordinary and altered states of awareness, the evolution of consciousness, and the theory of human and computer creativity.
1. INTRODUCTION
This paper outlines a unified theory which, it is claimed, is in principle capable of accounting for all aspects of the phenomenon of consciousness. This is not a vacuous "hodge-podge" theory formed by piecing together disparate ideas from different disciplines, but rather a theory which begins from very simple philosophical assumptions, and which is, I will argue, extremely useful for resolving concrete problems regarding the nature and behavior of consciousness.
At the core of this theory are two very simple ideas:
1) that consciousness is absolute freedom, pure spontaneity and lawlessness; and
2) that pure spontaneity, when considered in terms of its effects on structured systems, manifests itself as randomness.
These elementary philosophical assumptions, I will show, lead to a host of novel conclusions and hypotheses, including: an improved understanding of the quantum theory of measurement, a new approach to the neuropsychology of ordinary and altered states of awareness, a clarified vision of the evolution of human consciousness, and a new analysis of human and machine creativity.
For sake of convenience, I will refer to this theory of consciousness as the "randomness theory." However, this phrase should not be over-interpreted: to declare that "consciousness is randomness" would be an overstatement, albeit a suggestive one. Consciousness is consciousness only; it is too elemental to be thoroughly expressed in terms of anything else. The correct statement is that, when one seeks to study consciousness by the scientific method, i.e. by detecting patterns in systems, one finds that consciousness is associated with the pattern of patternlessness -- with randomness. Randomness, if not the essence of consciousness, is at least the incarnation which consciousness assumes in the realm of regularities.
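The phrase "the pattern of patternlessness" has a concrete counterpart in algorithmic information theory: a sequence is random to the extent that no description of it is appreciably shorter than the sequence itself. The following toy sketch (my own illustration, not part of the theory itself) uses off-the-shelf compression as a crude stand-in for pattern-detection:

```python
import zlib
import random

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed form: a rough proxy for 'amount of
    pattern' -- the more regularity present, the shorter the compression."""
    return len(zlib.compress(data, level=9))

patterned = b"ab" * 5000                                  # highly regular
rng = random.Random(0)                                    # fixed seed
noisy = bytes(rng.getrandbits(8) for _ in range(10000))   # pseudo-random bytes

# The regular string compresses to a tiny fraction of its size;
# the pseudo-random string barely compresses at all.
print(compressed_size(patterned) < 100)
print(compressed_size(noisy) > 9000)
```

Both lines print True: the compressor finds an abundance of pattern in the first string and essentially none in the second, which is exactly the operational sense in which randomness is "patternlessness."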
In his book Consciousness Explained (1991), Daniel Dennett describes his own theory of consciousness as follows:
Now this appears to be a trick with mirrors, I know. And it certainly is counterintuitive, hard-to-swallow, initially outrageous -- just what one would expect of an idea that could break through centuries of mystery, controversy and confusion.
Dennett's approach to consciousness is quite different from (though not entirely incompatible with) my own; but his words here seem quite appropriate to the randomness theory. My quarrel with Dennett's theory is that -- to borrow Niels Bohr's famous quip about unified field theories -- it is not crazy enough. Dennett's analysis is thoroughly deterministic and is thus nowhere near as "initially outrageous" as he thinks it is. What is needed in order to "break through centuries of mystery, controversy and confusion" is not a reduction of consciousness to deterministic dynamics, but an elucidation of the interplay of determinism and nondeterminism. Or at least, that is the belief underlying the present approach.
In fact, the randomness theory is not entirely new. However, it has been hinted at much more often than it has been openly stated. For instance, Roger Penrose's anti-AI tract The Emperor's New Mind (1988) argues that consciousness and creativity cannot be modeled with computers, but only with uncomputable (i.e. random) numbers. And the consciousness-based approach to quantum measurement, as developed e.g. by Eugene Wigner (1962) and Amit Goswami (1990), implies that the only role of consciousness is to make a random choice. But both of these trains of thought stop short of explicitly associating consciousness with randomness.
By far the closest thing to an explicit statement of the randomness theory of consciousness is to be found in the "scientific metaphysics" of the nineteenth-century American philosopher Charles S. Peirce. As I will demonstrate in detail below, a careful reading of Peirce shows that his animistic philosophy, which he called Tychism (chance-ism), has the randomness theory of consciousness as a corollary.
The randomness theory, I will show, provides a natural explanation for the mysterious appearance of consciousness in the quantum theory of measurement. The problem with the quantum theory of consciousness, as usually formulated, is its lack of any connection with the biology or phenomenology of consciousness. The randomness theory fills in the missing link: it is randomness which collapses the quantum wave-packet, and this same randomness which drives the neural processes underlying the everyday consciousness of objects.
In artificial intelligence, also, the randomness theory provides a missing link: the link between computation and creativity. From its inception, AI has been pursued by the nagging question: "If we build a machine that 'thinks,' where does the awareness come in? How can it be truly creative if it is just following rules, if there is no free will, no reflection, no consciousness of what it is doing?" The randomness theory declares that the consciousness of a machine may be associated with the randomness in its dynamics. Thus, machines made of silicon and steel are in principle just as capable of consciousness as we machines made of biomolecules.
On the face of it, the very ubiquity of randomness might seem to pose a problem for the randomness theory of consciousness. If every particle has some randomness in it, then everything is conscious, and what can be said about the special properties of human consciousness? What this objection ignores, however, is the distinction between consciousness in itself and consciousness of something. I will argue that the sustained consciousness of an object which we humans experience can only be attained by very special mechanisms.
In this view, altered states of consciousness, such as meditation and creative inspiration, can be seen as resulting from subverting the circuitry evolved for object-consciousness, and using it for other purposes. Furthermore, ordinary linguistic thought can be seen as a less extreme version of the same phenomenon. This leads to a reinterpretation of Dennett's (1991) hypothesis that consciousness is a meme (a socially transmitted, self-perpetuating behavior/thought pattern): it is not consciousness itself which is memetic, but rather the refocusing, on abstract patterns, of circuitry intended for simple object-consciousness.
As an example of the kind of "special mechanism" required for object- consciousness I will propose a speculative neural mechanism called the "perceptual/cognitive loop" (PCL), which gives one method by which the brain could, conceivably, amplify simple randomness into sustained randomness regarding some specific focus of attention. The PCL ties in with many current ideas in neuroscience, most notably Edelman's (1988) Neural Darwinism and Taylor's (1993) neural network theory of mind. In addition to being a plausible neuropsychological hypothesis, it would seem to be quite interesting as a blueprint for constructing AI programs.
The randomness theory of consciousness may seem to be a mere sophistry, an attempt to probe deep waters with shallow- water instruments. In short, it may appear "too simple to be true." But my conviction is that consciousness really is simple. It is we scientists and philosophers who have made it seem complex, by confusing it with those phenomena that it tends to be associated with. The randomness theory allows consciousness its elemental simplicity, and places the complexity where it belongs: in the various disciplines for which a theory of consciousness has implications.
2. A PEIRCEAN PERSPECTIVE
In this section I will put the randomness theory of consciousness in a philosophical context, by briefly pursuing its roots in the thought of Charles S. Peirce. Peirce's vision of consciousness, while very different in tone and emphasis from my own, is quite similar in content; so that, if the reader does not find my exposition of the randomness theory convincing, she will perhaps find Peirce's formulations more palatable. Peirce's approach gives the theory an extremely systematic air -- rather than just proposing it as an hypothesis unto itself, Peirce derives the randomness theory from his general philosophical system. For those who value philosophical systematics, this may be an important point.
Inspired by Kant's philosophical categories, but frustrated with their byzantine complexity, Peirce introduced three fundamental metaphysical categories: First, Second and Third. As described in his essay on the "Architecture of Theories,"
First is the conception of being or existing independent of anything else. Second is the conception of being relative to, the conception of reaction with something else. Third is the conception of mediation, whereby a first and second are brought into relation.... (1935, p. 25)
Illustrative examples of these general conceptions may be drawn from any field of study. For instance:
In psychology, Feeling is First, Sense of reaction Second, General conception Third. (p.26)
This statement implies point (1) of the second paragraph of the Introduction. And one need not delve much further into the concept of Firstness to reach point (2). For instance, consider:
Chance is First, Law is Second, the tendency to take habits is Third. (p. 26)
This suggests a connection between consciousness and randomness -- a connection which is elsewhere pursued by Peirce quite explicitly.
Consciousness and randomness may seem to be an odd couple, given the simplicity of the psychological idea of consciousness and the abstruseness of the mathematical concept of the "random." But the differences in tone between different fields should not be allowed to confuse fundamental issues. Indeed, it was to forestall precisely such a confusion that Peirce wrote:
[W]hen I speak of chance, I only employ a mathematical term to express with accuracy the characteristics of freedom or spontaneity. (p. 27)
Consciousness is epistemologically primary: it exists on a much simpler level than "randomness." Randomness only exists relative to some structure, some system -- but consciousness, pure freedom and spontaneity, exists independently of any structure; it exists on its own terms. However, once one adopts the point of view of Thirdness, of the world of relation, structure and systematicity, then consciousness assumes the form of randomness. In the language of Peircean semiotics, this means that randomness is the Thirdness of First. Randomness is the guise which raw consciousness assumes when viewed from the perspective of the world of relationship.
A comment on the role of Secondness in Peircean psychology is perhaps in order here. "Thirdness of First" is to be contrasted with the Secondness of First, which is the feeling of changing sensation. Psychologically, Secondness, the feeling of reaction or being-in-the-world, is most closely related to touch and kinesthesia, senses which are direct in the sense of admitting very little representation. By means of paintings or photographs one can give a false impression of looking at sand, but using current technology, to give someone a false impression of feeling sand one has to touch their skin with something very similar to sand. The Secondness of First is exemplified by the feeling of absent-mindedly tracing a finger across an object. As soon as one compares what one is feeling to a memory store of objects, one is involved with Thirdness, and one can ask whether what one is feeling contains recognizable patterns or else is random. But the mere sense of sensory difference, of change, is freedom making itself felt as reaction, as Second.
For a clearer statement on the concept of Firstness one may turn to Peirce's essay on "The Logic of the Universe":
The sense-quality is a feeling. Even if you say it is a slumbering feeling, that does not make it less intense; perhaps the reverse. For it is the absence of reaction -- of feeling another -- that constitutes slumber, not the absence of the immediate feeling that is all that it is in its immediacy. Imagine a magenta color. Now imagine that all the rest of your consciousness -- memory, thought, everything except this feeling of magenta -- is utterly wiped out, and with that is erased all possibility of comparing the magenta with anything else or of estimating it as more or less bright. That is what you must think the pure sense-quality to be. Such a definite potentiality can emerge from the indefinite potentiality only by its own vital Firstness and spontaneity. Here is this magenta color. What originally made such a quality of feeling possible? Evidently nothing but itself. It is a First.
Here the connection with consciousness is made quite explicit. Consciousness, when separated from the apparatus of memory and cognition, is a First. And furthermore,
[W]hatever is First is ipso facto sentient. If I make atoms swerve -- as I do -- I make them swerve but very very little, because I conceive they are not absolutely dead. And by that I do not mean exactly that I hold them to be physically such as the materialists hold them to be, only with a small dose of sentiency superadded. For that, I grant, would be feeble enough. But what I mean is, that all there is, is First, Feelings; Second, Efforts; Third, Habits -- all of which are more familiar to us on their psychical side than on their physical side; and that dead matter would be merely the final result of the complete induration of habit reducing the free play of feeling and the brute irrationality of effort to complete death.
The random swervings of microscopic particles are here associated with the "absolute chance" in the universe, or in other words with the presence of a sentient element even in the supposedly inanimate world. As we shall see, this is a remarkable premonition of the roles of consciousness and randomness in quantum physics (which, at the time Peirce wrote, was still decades in the future).
Somewhat confusingly for the psychologist, Peirce avers that
Mind is First, Matter is Second, Evolution is Third
However, his own very interesting theory of mind clarifies the meaning of this classification. For his theory centers around the Thirdness of mind, in other words the evolution of mental structures and processes:
Logical analysis applied to mental phenomena shows that there is but one law of mind, namely, that ideas tend to spread continuously and to affect certain others which stand to them in a peculiar relation of affectability. In this spreading they lose intensity, and especially the power of affecting others, but gain generality and become welded with other ideas. (p. 87)
In this passage the three categories are intermixed in a brilliant and subtle way: the Law (a Second) of Mind (a First) is the dynamic involving relationship (a Third).
In modern terminology Peirce's "law of mind" might be rephrased as follows: "The mind is an associative memory, and its dynamic dictates that each idea stored in the memory continually acts on those other ideas with which the memory associates it." In (Goertzel, 1993, 1993a, 1994), using ideas from theoretical computer science, I have developed a Peirce- inspired psychology in detail. This psychology will be drawn on occasionally in the following pages, with an eye on a question which Peirce never directly addressed: how exactly does First (consciousness, absolute chance) affect Second (the sense of being in the world) and Third (patterned relation, mind)?
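Peirce's "law of mind," as rephrased above, can be sketched as a toy spreading-activation process over an associative memory. Everything in this sketch (the particular associations, the decay parameter, the update rule) is a hypothetical illustration of the dynamic, not a model drawn from Peirce or from the text:

```python
# Toy sketch of the "law of mind": each idea donates a decayed share of its
# activation to its associates, keeping none for itself. Intensity is lost
# at each step, while the activation spreads ("welds") across more ideas.

associations = {                 # idea -> the ideas it "affects"
    "magenta": ["red", "blue"],
    "red": ["fire"],
    "blue": ["sky"],
    "fire": [],
    "sky": [],
}

def spread(activation: dict[str, float], decay: float = 0.5) -> dict[str, float]:
    """One step of spreading activation: total intensity shrinks by the
    decay factor, but coverage of the network grows."""
    new: dict[str, float] = {}
    for idea, level in activation.items():
        targets = associations[idea]
        if not targets:
            continue
        share = level * decay / len(targets)
        for other in targets:
            new[other] = new.get(other, 0.0) + share
    return new

a0 = {"magenta": 1.0}
a1 = spread(a0)                  # red and blue now weakly active
a2 = spread(a1)                  # fire and sky, more weakly still
print(sum(a1.values()) < sum(a0.values()))   # ideas "lose intensity"
print(len(a1) + len(a2) > len(a0))           # but "gain generality"
```

Both printed checks come out True, mirroring the two halves of Peirce's law: spreading ideas lose intensity but become welded with other ideas.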
The difference between Thirds or habits, as governed by the "law of mind," and Firsts as experienced by consciousness, is best described in the essay on "Objective Logic." Peirce emphasized
the difference between two kinds of consciousness, the quale-consciousness and that kind of consciousness which is intensified by attention, which objectively considered, I call vividness.... (p. 150)
Peirce's "vividness" is what I call the consciousness of an object. It is a special kind of Third which relates elemental consciousness -- "quale-consciousness" -- with other things in a clever way. And Peirce's "quale-consciousness" is what I prefer to refer to as "raw consciousness": it is simple, unanalyzable awareness of some entity:
The quale-consciousness is not confined to simple sensations. There is a peculiar quale to purple, though it be only a mixture of red and blue. There is a distinctive quale to every combination of sensations so far as it is really synthetized -- a distinctive quale to every work of art -- a distinctive quale to this moment as it is to me....
Each quale is in itself what it is for itself, without reference to any other. It is absurd to say that one quale in itself is considered like or unlike another.... (p. 152)
Peirce had little to say about the means by which quale-consciousness leads to "consciousness which is intensified by attention." But in his description of the nature of quale-consciousness, he gives certain hints in this direction, which I will take up and develop much further in later sections:
And now I enunciate a truth. It is this. In so far as qualia can be said to have anything in common, that which belongs to one and all is unity; and the various synthetical unities which Kant attributes to the different operations of the mind, as well as the unity of logical consistency ... and also the unity of the individual object, all these unities originate, not in the operations of the intellect, but in the quale-consciousness upon which the intellect operates...
Perhaps it may be thought that hypnotic phenomena show that subconscious feelings are not unified. But I maintain on the contrary that those phenomena exhibit the very opposite peculiarity. They are unified so far as they are brought into one quale-consciousness at all; and that is why different personalities are formed. Of course, each personality is based on a "bundle of habits".... But a bundle of habits would not have the unity of self-consciousness. That unity must be given as a centre for the habits.
The brain shows no central cell. The unity of consciousness is therefore not of physiological origin.... I say then that this unity is logical in this sense, that to feel, to be immediately conscious, so far as possible, without any reaction nor any reflection, logically supposes one consciousness and not two nor more....
In quale-consciousness there is but one quality, but one element. It is entirely simple....
Thus consciousness, so far as it can be contained in an instant of time, is an example of quale-consciousness. Now everybody who has begun to think about consciousness at all has remarked that the present so conceived is absolutely severed from past and future....
So I might express my truth by saying:
The Now is one, and but one. (p. 153)
What Peirce is saying here is something which, as we shall see, puzzles the neuropsychology community even today. Neuropsychologists have proven what Peirce suspected, that "the brain has no central cell," no Cartesian Theater of consciousness. But they have not yet come to grips with the function of this distributed phenomenon of consciousness. Following Peirce, in later sections I will argue that it is consciousness which gives the unity to everyday objects. Consciousness is First, it is unity, and it is also a giver of unity. If ideas and memories have a certain cohesion to them, it is because consciousness has granted them this cohesion.
And finally, on the very next page, Peirce brings chance back into the picture:
So much for the meaning of the proposition. I now call attention to a remarkable consequence of it. Namely it follows that there is no check upon the utmost variety and diversity of quale-consciousness as it appears to the comparing intellect. For if consciousness is to blend with consciousness, there must be common elements. But if it has nothing in itself but just itself, it is sui generis and is cut loose from all need of agreeing with anything. Whatever is absolutely simple must be absolutely free; for a law over it must apply to some common feature of it....
And thus it is that that very same logical element of experience, which appears upon the inside as unity, when viewed from the outside is seen as variety. It is totus, teres, atque rotundus. (p. 154)
In Peirce's vocabulary, variety is synonymous with chance -- he likes to speak of "the infinite diversity of the universe, which we call chance." Chance ensues from elemental freedom. Consciousness appears in the world of structure as the random. And what regulates these chance eruptions is the law of mind, the tendency to take habits. Consciousness and habituation work together to produce the structured diversity of the world.
Despite arriving at the striking insights which I have quoted here so liberally, Peirce never fully developed the randomness theory of consciousness. This was typical of Peirce, who proposed many more ideas than it would have been possible for him to pursue in detail, but it also showed sound judgement, for the science of Peirce's day had little place for a randomness theory of consciousness. Physics was mired in Laplacian determinism, neuroscience was barely existent, and Peirce's friend William James was just laying the foundations for the psychology of altered states of awareness. Today, however, things are quite different: physics, psychology and neuroscience are more than ready to yield a place for a creative, random element. The time for consciousness- as- randomness has come.
3. QUANTUM THEORY AND CONSCIOUSNESS
Peirce, boldly contradicting the deterministic physics of his time, proposed that all atoms contain a chance element which makes them "swerve" a little bit. This chance element he equated with the absolute freedom, spontaneity or consciousness of the atom. Quantum physics has proved Peirce right on both these points: all atoms do swerve a little; and this swerving is, at least in the view of some distinguished scientists, intimately connected with consciousness.
In more modern language, what quantum physics tells us is that an event does not become definite until someone observes it. An unobserved quantum system remains in an uncertain state, a superposition of many different possibilities. Observation causes "collapse" into a definite condition, which is chosen at random from among the possibilities provided. This peculiar but well- established empirical fact makes it natural to associate consciousness with quantum measurement (Wigner, 1962; London and Bauer, 1983; Goswami, 1990).
But the problem with this "quantum theory of consciousness" is that it fails to connect in any obvious way with the biology and psychology of consciousness. One cannot plausibly define consciousness as the collapse of the quantum wave function. A good theory of consciousness must have something to say about the psychology of attention, about the neural processes underlying the subjectively perceived world, and above all about the experience of being a conscious person.
Consider the classic double-slit experiment. A particle passes through one of two slits in a barrier and leaves a mark on a plate on the other side of the barrier, indicating which slit it passed through. If one observes each particle as it passes through the barrier, the marks on the plate will be consistent with one's observations: "Fifteen hundred through the top slit, six hundred and ninety through the bottom slit," or whatever. But if one does not observe the particles passing through the barrier, then something very strange happens. There are marks on the plate where there shouldn't be any -- marks that could not have been made by particles passing through either slit. Instead of passing through the slit like a good particle should, the particle acts as if it were a wave in some mysterious medium, squeezing through the slit and then rapidly diffusing. The key point is whether the particle was looked at or not.
In fact, according to (Wheeler, 1980), this even works if the choice is delayed -- then one has the phenomenon of the "quantum eraser." In other words, suppose one has a machine record which slit each particle passed through. If after a few hours one destroys the machine's records without having looked at them, and only afterwards looks at the plate, then the result is the same as if the information had never existed; the plate shows that the particles behaved like waves. But in the same scenario, if one looks at the machine's information before one erases it, the picture on the plate is quite different: it is consistent with whatever the machine said.
Somehow looking at the particle, measuring it, forces it to make a choice between one of the two alternatives, one of the two slits. This choice is a random one: even knowing all there is to know about the physical system, there is no way to predict the path that each individual particle will take, in the eventuality that it is observed. Since the reduction from indeterminacy to definiteness occurs at the point of observation, it is natural to posit that consciousness itself is the crucial factor, the agent which forces a choice. This idea is doubly appealing since consciousness is often psychologically associated with choice or decision. In many cases we only become conscious of something when appropriate unconscious processes judge that some sort of complex decision in its regard is required (Mandler, 1985).
The quantum measurement problem poses a serious conceptual dilemma. If the dynamical equations of quantum theory are taken literally, nothing is ever in a definite state; everything is always suspended in a superposition of various possibilities. Yet that is not what we see in the world around us -- neither in the physics lab nor in everyday life. When, then, does the superposed world become the actual world? When it is recorded by a machine? When it is recorded by a person? What about an intelligent machine ... or an observant chimpanzee, dog, mouse, or ant?
Another way to think about the measurement dilemma is to introduce the concept of prediction. Intelligence works by predicting the future; quantum physics, however, tells us that prediction is not so easy as it might seem. In the commonsense view of things, if one wants to predict the probability of event A or event B occurring, and the two events are mutually exclusive, the following algorithm suffices: first, determine the probability of A occurring; then, determine the probability of B occurring; then, finally, add these two probabilities. In quantum physics this prediction method does not work; the probability of "A or B" is not determined by the individual probabilities of A and B, even if A and B are mutually exclusive. To see why, suppose that A and B refer to the double slit experiment: A is the event that a particle marks the recording plate as though it passed through the top slit, and B is the event that it marks the recording plate as though it passed through the bottom one. Then the chance of "A or B" is in general less than the sum of the chance of A with the chance of B -- because sometimes an unobserved particle will land in a place which is not compatible with its having passed through either slit. But, on the other hand, if the particle is observed then the ordinary probability formula works.
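In standard quantum notation, the failure of the additive rule is just the interference cross term. The following identity is textbook quantum mechanics, stated here only for concreteness: unobserved alternatives are combined by adding amplitudes $\psi_A, \psi_B$ rather than probabilities, so that

```latex
P(A \text{ or } B) \;=\; |\psi_A + \psi_B|^2
  \;=\; \underbrace{|\psi_A|^2}_{P(A)} \;+\; \underbrace{|\psi_B|^2}_{P(B)}
  \;+\; \underbrace{2\,\mathrm{Re}\!\left(\psi_A^{*}\,\psi_B\right)}_{\text{interference term}}
```

Observing which slit the particle passed through destroys the coherence between $\psi_A$ and $\psi_B$; the interference term then vanishes and the commonsense rule $P(A \text{ or } B) = P(A) + P(B)$ is recovered. Where the cross term is negative, the unobserved probability falls below the classical sum, and where it is positive, particles accumulate at points incompatible with passage through either slit.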
One may say that consciousness causes patterns to reduce from uncertain, superimposed states into definite states. Or, using a less mysterious language, one may say that consciousness changes the algorithm that must be used in order to correctly predict the patterns in the future. Consciousness introduces a random change in the correct prediction algorithm.
From Particles to Patterns
The quantum measurement problem has inspired some very radical theories -- most notably the "many-worlds hypothesis," which states that every point of random choice creates several parallel universes, one corresponding to each possible decision. But perhaps the most convincing idea yet proposed is the statistical theory of measurement. In compressed form, the key idea of this approach is simply that measurement has to do with the statistical coupling of the measuring system and the object being measured.
The physicist Asher Peres has been one of the most ardent advocates of this point of view. In one of his many papers on the topic, Peres (1986) caps off a long discussion of the thermodynamics of measuring devices by concluding that
A measuring apparatus must have macroscopically distinguishable states, and the word macroscopic has just been defined as "incapable of being isolated from the environment."
Peres's thermodynamic arguments show that what is physically meant by "macroscopic" is nothing other than "statistically coupled with the environment." But a measurement device is defined as something with macroscopic states. Therefore, measurement is conceptually bound up with statistical correlation. A more concrete, less detailed formulation of the same idea was given by none other than Richard Feynman, in a letter to a friend:
Proposal: only those properties of a single atom can be measured, which can be correlated (with finite probability) with an unlimited number of atoms.
The key to understanding this thermodynamic, statistical point of view is, I propose, to abstract it from the technical language of thermodynamical "correlations." A correlation is, in essence, a way of predicting the behavior of a whole group of entities from the behavior of a small subset of the group. In other words, a correlation is a regularity; it is a pattern. A correlation in a collection of particles is a pattern in that collection. This brings us very neatly back to Peirce's three categories. The statistical theory of quantum measurement has discovered that particles are Third. They are not merely solid objects "out there," not merely acting and reacting Seconds; they are relations, they are patterns, they are Thirds. Think about it. "Only those properties of a single atom can be measured which can be correlated with an unlimited number of atoms." This implies that every property of a single atom which can be measured is actually a pattern emergent between the atom and other atoms. And how can one tell if a group of atoms are statistically correlated? Only by measuring them! But if measuring means detecting a statistical correlation -- then it follows that the atoms themselves are never directly measured, only certain "properties" that are in fact statistical correlations among large groups of atoms.
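The Feynman/Peres picture sketched above can be caricatured in a few lines of code. In this toy model (entirely my own illustration, with made-up parameters), an atom's state is copied, noisily, into many "environment" atoms; what the observer reads out is never the atom itself but the correlation pattern, recovered here by a majority vote over a small subset of the environment:

```python
import random

rng = random.Random(42)  # fixed seed for reproducibility

def make_record(atom_state: int, n_env: int, noise: float = 0.1) -> list[int]:
    """Noisily copy one atom's state into n_env environment atoms --
    a cartoon of 'correlation with an unlimited number of atoms.'"""
    return [atom_state if rng.random() > noise else 1 - atom_state
            for _ in range(n_env)]

atom = 1
environment = make_record(atom, n_env=10000)

# The measurable 'property' is a pattern: a small subset of the
# environment suffices to predict the rest (and the atom's state).
sample = environment[:50]
inferred = int(sum(sample) > len(sample) / 2)
print(inferred == atom)  # the correlation, not the atom itself, is read out
```

Even with 10% copying noise, the vote over 50 environment atoms recovers the atom's state, because the correlation is redundantly spread across the whole collection; this redundancy is what makes the property "macroscopic" in Peres's sense.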
So, in sum: the statistical view of Peres and Feynman implies that only emergent patterns can be measured -- that physical reality is known only through its emergent patterns. It also says something more specific than this: it says that only certain emergent patterns, namely correlations, are measurable. But the important point here is that correlations are indeed emergent patterns.
What this leads up to is a radical understanding of the relation between mind and reality. For, suppose one starts with the "functionalist" or "pragmatist" view that the mind is made of patterns in the brain (Putnam, 1975; Goertzel, 1993). And suppose one appends to this view the idea that all we can physically measure are emergent patterns. What follows is that physical reality is in no fundamental way separate from mental reality. Insofar as we can measure it, physical reality is just a certain subset of the collection of patterns that makes up the mind. Or, as Peirce put it, "Matter is mind hide-bound with habit." The patterns that make up the physical world are obviously much less mutable than those which make up the remainder of the mind. But this is a quantitative rather than a qualitative difference. To use the language of Scholastic philosophy, it is not a difference in "essence."
What does all this have to do with consciousness? The crucial question is: if mind and matter are at bottom one, then how much sense does it make to distinguish between the way consciousness acts on mind and the way consciousness acts on matter? There might, in principle, be a large variation in the way consciousness treats different types of patterns. But as a first hypothesis, it is most sensible to assume that this variation is not so large, and to look for one basic principle of consciousness, spanning the physical and psychic domains.
In Peircean terms, this argument may be rephrased as follows. If both particles and thoughts are at bottom Thirds, then consciousness as it appears in physics must be the same as consciousness as it appears in psychology: both manifestations of consciousness are Thirdness of First. Newtonian physics portrayed the physical world as a world of Secondness; hence the appearance of consciousness in the physical world would have had to be a Secondness of First, something different than the psychological appearance of consciousness. But quantum physics placed the material world where Peirce already knew it belonged: along with ideas, in the world of Thirdness.
In quantum physics consciousness manifests itself as a change in the correct prediction algorithm. This is a case of pure chance appearing in a structured context as statistical and algorithmic randomness. The random choice of A or B affects the course of evolution of a physical system, a system of patterns; this random choice is not, in itself, consciousness, but it is one physically observable manifestation of the absolute spontaneity that is consciousness.
3. THE NEUROPSYCHOLOGY OF ATTENTION
Quantum theory approaches consciousness from the unlikely perspective of microscopic physics. A more direct approach to consciousness is via the biology of the brain. For decades neuropsychologists have shunned the word "consciousness," preferring the less controversial, more technical term "attention." But despite the methodological conservatism which this terminology reflects, there has been a great deal of excellent work on the neural foundations of conscious experience. In particular, two recent discoveries in the neuropsychology of attention stand out above all others. These discoveries correspond to two observations made by Peirce in the passages quoted above. Peirce argued that "The brain shows no central cell. The unity of consciousness is therefore not of physiological origin...." Now neuropsychologists have shown that, indeed, conscious processes are distributed throughout the brain, not located in any single nexus. And Peirce proclaimed that "the unity of logical consistency ... and also the unity of the individual object ... not in the operations of the intellect, but in the quale- consciousness upon which the intellect operates." Now neuropsychologists have shown that the role of consciousness in perception and cognition is precisely that of grouping, of forming wholes.
In short, what in Peirce's time was mere speculation is now fact; and this places Peirce's scientific arguments for a randomness theory of consciousness on an even stronger footing. These biological ideas are not necessary for the randomness theory of consciousness, but they are an interesting addition to the more philosophical arguments. So, let us briefly review some of the recent neuropsychological developments.
According to Rizzolatti and Gallese (1988), there are two basic ways of approaching the problem of attentiveness. The conventional approach rests on two substantial claims:
1) that in the brain there is a selective attention center or circuit independent of sensory and motor circuits; and
2) that this circuit controls the brain as a whole.... (p. 240)
In its most basic, stripped- down form this first claim implies that there are some brain regions exclusively devoted to attention. But there are also more refined interpretations: "It may be argued ... that in various cerebral areas attentional neurons can be present, intermixed with others having sensory or motor functions. These attentional neurons may have connections among them and form in this way an attentional circuit" (p.241).
This view of attention amounts to what Dennett (1991) calls the "Cartesian Theater." It holds that there is some particular place at which all the information from the senses and the memory comes together into one coherent picture, and from which all commands to the motor centers ultimately emanate. Even if there is not a unique spatial location, there is at least a single unified system which acts as if it were all in one place.
Rizzolatti and Gallese contrast this conventional view with their own "premotor" theory of attention, of which they say:
First, it claims that ... attention is a vertical modular function present in several independent circuits and not a supramodal function controlling the whole brain. Second, it maintains that attention is a consequence of activation of premotor neurons, which in turn facilitates the sensory cells functionally related to them.
The second of these claims is somewhat controversial - - many would claim that the sensory rather than premotor neurons are fundamental in arousing attention. However, as Rizzolatti and Gallese point out, the evidence in favor of the first point is extremely convincing. For instance, Area 8 and inferior Area 6 have no reciprocal connections - - and even in their connections with the parietal lobe they are quite independent. But if one lesions either of these areas, severe attentional disorders can result, including total "neglect" of (failure to be aware of) some portion of the visual field.
There are some neurological phenomena which at first appear to contradict this "several independent circuits" theory of consciousness. But these apparent contradictions result from a failure to appreciate the self- organizing nature of brain function. For instance, as Rizzolatti et al (1981) have shown, although the neurons in inferior Area 6 are not responsive to emotional stimuli, nevertheless a lesion in this area can cause an animal to lose its ability to be aware of emotional stimuli. But this does not imply the existence of some brain- wide consciousness center. It can be better explained by positing an interdependence between Area 6 and some other areas responsive to the same environmental stimuli and also responsive to emotional stimuli. When one neural assembly changes, all assemblies that interact with it are prodded to change as well. Consciousness is part of the self- structuring process of the brain; it does not stand outside this process.
So consciousness is distributed rather than unified. But what does neuropsychology tell us about the role of consciousness? It tells us, to put it in a formula, that consciousness serves to group disparate features into coherent wholes. This conclusion has been reached by many different researchers working under many different theoretical presuppositions. There is no longer any reasonable doubt that, as Umilta (1988) has put it, "the formation of a given percept is dependent on a specific distribution of focal attention."
For instance, Treisman and Schmidt (1982) have argued for a two- stage theory of visual perception. First is the stage of elementary feature recognition, in which simple visual properties like color and shape are recognized by individual neural assemblies. Next is the stage of feature integration, in which consciousness focuses on a certain location and unifies the different features present at that location. If consciousness is not focused on a certain location, the features sensed there may combine on their own, leading to the perception of illusory objects.
This view ties in perfectly with what is known about the psychological consequences of various brain lesions. For instance, the phenomenon of hemineglect occurs primarily as a consequence of lesions to the right parietal lobe or left frontal lobe; it consists of a disinclination or inability to be aware of one or the other side of the body. Sometimes, however, these same lesions do not cause hemineglect, but rather delusional perceptions. Bisiach and Berti (1987) have explained this with the hypothesis that sometimes, when there is damage to those attentional processes connecting features with whole percepts in one side of the visual field, the function of these processes is taken over by other, non- attentional processes. These specific consciousness- circuits are replaced by unconscious circuits, but the unconscious circuits can't properly do the job of percept- construction; they just produce delusions. And this sort of phenomenon is not restricted to visual perception. Bisiach et al (1985) report a patient unable to perceive meaningful words spoken to him from the left side of his body - - though perfectly able to perceive nonsense spoken to him from the same place.
Psychological experiments have verified the same phenomenon. For instance, Kawabata (1986) has shown that one makes a choice between the two possible orientations of the Necker cube based on the specific point on which one first focuses one's attention. Whatever vertex is the focus of attention is perceived as in the front, and the interpretation of the whole image is constructed to match this assumption. Similar results have been found for a variety of different ambiguous figures - - e.g. Tsal and Kolbet (1985) used pictures that could be interpreted as either a duck or a rabbit, and pictures that could be seen as either a bird or a plane. In each case the point of conscious attention directed the perception of the whole. And, as is well known in such cases, once consciousness has finished forming the picture into a coherent perceived whole, this process is very difficult to undo.
Treisman and Schmidt's division of perception into two levels is perhaps a little more rigid than the available evidence suggests. For instance, experiments of Prinzmetal et al (1986) verify the necessity of consciousness for perceptual integration, but also point out some minor role for consciousness in enhancing the quality of perceived features. But there are many ways of explaining this kind of result. It may be that consciousness acts on more than one level: first in unifying sub- features into features, then in unifying features into whole objects. Or it may be that perception of the whole causes perception of the features to be improved, by a sort of feedback process.
So, consciousness is a kind of dynamic which takes place independently in many parts of the brain at once. It groups disparate features into unified wholes. But what does this have to do with randomness? This is where the idea of "consciousness as randomness affecting structure" leads to a novel, specific neuropsychological hypothesis. Neuroscience does not yet tell us how the distributed process of consciousness actually goes about the task of creating coherent wholes. The theory of consciousness as randomness provides one suggestion regarding how to fill this gap. It suggests an "iterative algorithm of perceptual consciousness."
The Perceptual/Cognitive Loop
Consciousness, according to the neuropsychological results discussed above, might seem to be the exact opposite of randomness. After all, randomness destroys order; consciousness imposes it. But this contradiction, I suggest, is only apparent. Edelman (1990) has proposed that consciousness consists of a feedback loop from the perceptual regions of the brain to the "higher" cognitive regions. In other words, consciousness is a process which cycles information from perception to cognition, to perception, to cognition, and so forth (in the process continually creating new information to be cycled around).
Taking this view, one might suppose that the brain lesions discussed above hinder consciousness, not by destroying an entire autonomously conscious neural assembly, but by destroying the perceptual end of a larger consciousness- producing loop, a perceptual/cognitive loop or PCL. But the question is: why have a loop at all? Are the perceptual processes themselves incapable of grouping features into wholes; do they need cognitive assistance?
The cognitive end of the loop, I suggest, serves largely as a tester and controller. The perceptual end does some primitive grouping procedures, and then passes its results along to the cognitive end, asking for approval: "Did I group too little, or enough?" The cognitive end seeks to integrate the results of the perceptual end with its knowledge and memory, and on this basis gives an answer. It gives the answer "too little" if the proposed grouping is simply torn apart by contact with memory - - if different parts of the supposedly coherent percept connect with totally different remembered percepts, whereas the whole connects significantly with nothing. And when the perceptual end receives the answer "too little," it goes ahead and tries to group things together even more, to make things even more coherent. Then it presents its work to the cognitive end again. Eventually the cognitive end of the loop answers: "Enough!" Then one has an entity which is sufficiently coherent to withstand the onslaughts of memory.
Next, there is another likely aspect to the perceptual/cognitive interaction: perhaps the cognitive end also assists in the coherentizing process. Perhaps it proposes ideas for interpretations of the whole, which the perceptual end then approves or disapproves based on its access to more primitive features. This function is not in any way contradictory to the idea of the cognitive end as a tester and controller; indeed the two directions of control fit in quite nicely together.
Note that a maximally coherent percept is not desirable, because thought, perception and memory require that ideas possess some degree of flexibility. The individual features of a percept should be detectable to some degree, otherwise how could the percept be related to other similar ones? The trick is to stop the coherence- making process just in time.
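The loop just described can be sketched in simple code. This is a toy model, not a claim about neural implementation: the representation of a percept as weighted links among features, the grouping rate, the level of memory disruption, and the fixed acceptance threshold are all illustrative assumptions introduced here.

```python
import random

# Toy sketch of the perceptual/cognitive loop (PCL). A percept is a set
# of weighted feature-to-feature links; "memory disruption" is random
# weakening of those links; "Enough!" is a fixed cohesion threshold.

def cohesion(links):
    """Mean link strength among a percept's features (0 to 1)."""
    return sum(links.values()) / len(links)

def perceptual_grouping(links, rate=0.2):
    """Perceptual end: tighten every feature-to-feature link a little."""
    return {pair: min(1.0, w + rate * (1.0 - w)) for pair, w in links.items()}

def cognitive_test(links, memory_noise=0.3, threshold=0.6, rng=random):
    """Cognitive end: disrupt the candidate percept with quasi-random
    memory associations; report whether enough structure survives."""
    disrupted = {pair: max(0.0, w - rng.uniform(0.0, memory_noise))
                 for pair, w in links.items()}
    return cohesion(disrupted) >= threshold   # "Enough!" vs. "too little"

def perceptual_cognitive_loop(links, max_iters=50):
    """Iterate grouping and testing until the percept withstands memory."""
    for i in range(max_iters):
        if cognitive_test(links):
            return links, i
        links = perceptual_grouping(links)
    return links, max_iters

random.seed(0)
pairs = [("color", "shape"), ("shape", "location"), ("color", "location")]
initial = {pair: random.uniform(0.1, 0.4) for pair in pairs}
percept, iters = perceptual_cognitive_loop(initial)
print(f"accepted after {iters} iterations, cohesion {cohesion(percept):.2f}")
```

Note that because the test accepts the first cohesion level that survives disruption, the loop halts well short of maximal coherence - - a crude analogue of stopping the coherence-making process "just in time."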
But what exactly is "just in time"? There is not necessarily a unique optimal level of coherence. It seems more likely that each consciousness- producing loop has its own characteristic level of cohesion. Hartmann (1991) has proposed a theory which may be relevant to this issue: he has argued that each person has a certain characteristic "boundary thickness" which they place between the different ideas in their mind. Based on several questionnaire and interview studies, he has shown that this is a statistically significant method for classifying personalities. "Thin- boundaried" people tend to be sensitive, spiritual and artistic; they tend to blend different ideas together and to perceive a very thin layer separating themselves from the world. "Thick- boundaried" people, on the other hand, tend to be practical and not so sensitive; their minds tend to be more compartmentalized, and they tend to see themselves as very separate from the world around them. Hartmann gives a speculative account of the neural basis of this distinction. But the present theory of consciousness suggests an alternate account: that perhaps this distinction is biologically based on a difference in the "minimum cohesion level" accepted by the cognitive end of consciousness- producing loops.
Where does randomness enter into the picture, then? My claim is that this whole apparatus, this perceptual/cognitive loop, is not consciousness itself, but only a common correlate of consciousness in the human brain. When a potential "coherent object" is passed to the cognitive centers for testing, it is tested by allowing objects outside it to disrupt it. When they can disrupt it only very little, then the coherentization is accepted. This disruption, I suggest, is precisely our consciousness of the object. It is an invasion of the object by forces which, from the perspective of its own internal structure, are quite alien and random.
The iterative processing of information by the perceptual/cognitive loop is what enables the same object to be disrupted by randomness again and again and again. And this, I claim, is what gives the feeling that one is conscious of some specific object. Without this iteration, consciousness is felt to lack a definite object; the object in question lasts for so short a time that it is just barely noticeable.
And what of the old aphorism, "Consciousness is consciousness of consciousness"? This reflexive property of consciousness may be understood as a consequence of the passage of potential coherentizations from the cognitive end to the perceptual end of the loop. The cognitive end is trying to understand what the perceptual end is doing; it is recognizing patterns in the series of proposed coherentizations and ensuing memory- caused randomizations. These higher- order patterns are then sent through the consciousness- producing loop as well, in the form of new instructions for coherentization. Thus the process that produces consciousness itself becomes transformed into an object of consciousness.
All this ties in quite nicely with the neural network theory of consciousness proposed by the British mathematician R. Taylor (1993). Taylor proposes that the consciousness caused by a given stimulus can be equated with the memory traces elicited by that stimulus. For example, the consciousness of a sunset is the combination of the faint memory traces of previously viewed sunsets, or previously viewed scenes which looked like sunsets, etc. The PCL provides an analysis on a level one deeper than Taylor's theory; in other words, Taylor's theory is a consequence of the one presented here. For, if the PCL works as I have described, it follows that the cognitive end must always search for memory traces similar to the "stimulus" passed to it by the perceptual end.
Peirce distinguished "quale- consciousness" from "that kind of consciousness which is intensified by attention." It is the latter kind of consciousness to which neuropsychology has devoted its attention. One of the most important points of the randomness theory of consciousness is the sharp distinction which it draws between these two forms of consciousness. Raw consciousness, consciousness as First, is something much simpler than and quite different from the sustained consciousness of a particular object. Our brains amplify raw consciousness, pure spontaneity, in such a way that it "appears" to focus itself on a single object - - a lamp, a chair, a feeling of hatred, a memory from childhood, a certain musical note, even itself. I have proposed one mechanism by which this amplification could occur. But the question of the nature of this mechanism is entirely different from the question of the essential nature of consciousness.
What is Coherentization?
There is a missing link in the above account of the PCL: what, exactly, is this mysterious process of "coherentization," of boundary- drawing? Is it something completely separate from the ordinary dynamics of the mind? Or is it, on the other hand, an extension of these dynamics?
The chemistry involved is still a question mark, so the only hope of understanding coherentization at the present time is to bypass neuropsychology and try to analyze it from a general, philosophical perspective. One may set up a model of "whole objects" and "whole concepts," and ask: in the context of this model, what is the structure and dynamics of coherentization? In the Peircean perspective, both objects and ideas may be viewed as "habits" or "patterns," which are related to one another by other patterns, and which have the capacity to act on and transform one another. We then have the question of how a pattern can be coherentized, how its "component patterns" can be drawn more tightly together to form a more cohesive whole.
A clearer picture of the issues involved may be attained by giving these abstract "patterns" a more biological form. For this purpose, as in (Goertzel, 1993, 1993a, 1994), let us take Gerald Edelman's (1987) theory of neuronal group selection or "Neural Darwinism." Like many others before him, going back at least to Hebb (1949), Edelman identifies the fundamental biological unit of thought as the neural assembly or neuronal group. A single neuron contains little information; a group of tightly interlinked neurons, however, has a complex internal connection structure that controls its behavior in an interesting way.
Neuronal groups are the blocks from which things mental are made. Thoughts, percepts and actions, according to Edelman, are maps of neuronal groups - - collections of interlinked neuronal groups. Connections between neuronal groups are differentially reinforced based on usefulness; therefore neural maps can be said to evolve by natural selection. A neural map can survive the selective process by being constantly strongly stimulated by other neural maps. Or, on the other hand, it can survive by being very good at stimulating itself. My claim is that the latter case is the usual one. Those neural maps which survive are, by and large, the ones that are interconnected in a self- supporting way. And the same, I have suggested (Goertzel, 1994), holds on the next level up, a level at which Edelman only hints: maps of neural maps. Self- supporting neural maps are connected to form self- supporting second- order neural maps. The key property of a percept, concept or action, I propose, is its high degree of self- sustenance.
Another way to phrase this point is in the language of dynamical systems theory. The dynamical law of Neural Darwinism has two parts: 1) that neuronal groups stimulate other neuronal groups; and 2) that the connections between neuronal groups are strengthened by use or weakened by disuse. A self- sustaining neural map is a collection of neuronal groups which is an attractor of this Neural Darwinist dynamic (or at least, approximately an attractor). It may be a "fixed point" attractor, meaning that its condition remains always the same; or it may be a "limit cycle" attractor, meaning that its condition regularly oscillates over a range of values. More likely, however, is that real neural maps are "strange attractors" - - that they vary chaotically between different microstates, but that these microstates all stay within a certain complexly- contoured region of state space, the "attractor region."
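The two-part dynamic just described can be sketched concretely. The update rules and constants below are illustrative assumptions, not Edelman's own formalism: activity spreads through weighted connections, and a connection is "used," and hence reinforced, when both of its endpoints are active.

```python
import random

# Minimal sketch of the two-part Neural Darwinist dynamic:
# (1) neuronal groups stimulate one another through weighted connections;
# (2) connections strengthen with use and weaken with disuse.

def step(activity, weights, reinforce=0.2, decay=0.05):
    n = len(activity)
    # (1) each group's new activity is the weighted input from all groups
    new_activity = [min(1.0, sum(weights[j][i] * activity[j] for j in range(n)))
                    for i in range(n)]
    # (2) a connection is "used" when both of its endpoints are active
    new_weights = [[min(1.0, max(0.0, w + reinforce * activity[j] * activity[i] - decay))
                    for i, w in enumerate(row)]
                   for j, row in enumerate(weights)]
    return new_activity, new_weights

random.seed(1)
n = 4
activity = [random.random() for _ in range(n)]
weights = [[0.4 if i != j else 0.0 for i in range(n)] for j in range(n)]

for _ in range(100):
    activity, weights = step(activity, weights)

# A mutually stimulating map settles into a self-sustaining state: an
# attractor of the dynamic, in the sense used above.
print([round(a, 2) for a in activity])
```

A map whose groups stimulated each other only weakly would, under the same rule, decay toward zero activity; the map that survives here does so by being interconnected in a self-supporting way, which is precisely the claim made above.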
So, if concepts, percepts and actions are attractors of Neural Darwinist dynamics, then what does it mean to coherentize, say, a percept? It means, quite simply: to replace it with an approximately equivalent percept which is more completely self- producing, which is more of an attractor. For in the Neural Darwinist view, those percepts which are most likely to survive in the evolving pool of neural maps, are those which receive the most external stimulation, and those which perpetuate themselves the best. External stimulation is difficult to predict, but the tendency toward self- perpetuation can be built in; and this, in the Neural Darwinist model, is the most natural meaning for coherentization.
In this view, then, what the perceptual/cognitive loop does is to take a network of processes and iteratively make it more self- producing. Any mathematical function can be reproduced by a great number of neural maps; some of these maps will be highly self- producing and others will not. The trick is to find these self- producing maps. There are many possible strategies for doing this kind of search - - but one may be certain that the brain, if it does this kind of search, certainly does not use any fancy mathematical algorithm. It must proceed by a process of guided trial- and- error, and thus it must require constant testing to determine the degree of self- production of the current iterate. The consciousness, in this model, is in the testing, which disrupts the overly poor self- production of interim networks (the end result of the iterative process is something which leads to fairly little consciousness, because it is relatively secure against disruption by outside forces).
So, "coherentization" is not a catch- word devoid of content; it is a concrete dynamic, which can be understood as a peculiar chemical process, or else modeled in a system- theoretic way. Thinking about neuronal groups yields a particularly elegant way of modeling coherentization: as the search for an attractor of the Neural Darwinist dynamical system, which integrates a given collection of perceptual features. This gives a very concrete way of thinking about the coherentization of a complex pattern. To coherentize a pattern which is itself a system of simpler patterns, emerging cooperatively from each other, one must replace the component patterns with others that, while expressing largely the same regularities, emerge from each other in an even more tightly interlinked way.
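The guided trial-and-error search for a more self-producing map can itself be sketched. The "self-production" score and the mutation scheme below are illustrative assumptions: a map is kicked once, left to run with no outside input, and scored by how much activity it sustains on its own; candidate maps that score higher replace the current one.

```python
import random

# Sketch of coherentization as guided trial-and-error: search for a
# nearby neural map (weight matrix) that is more self-producing.

def self_production(weights, steps=20):
    """Kick the map once, let it run with no outside input, and measure
    how much activity it sustains on its own (0 to 1)."""
    n = len(weights)
    activity = [1.0] * n
    for _ in range(steps):
        activity = [min(1.0, sum(weights[j][i] * activity[j] for j in range(n)))
                    for i in range(n)]
    return sum(activity) / n

def mutate(weights, rng, amount=0.1):
    """Replace one component connection with a near-equivalent one."""
    n = len(weights)
    w = [row[:] for row in weights]
    j, i = rng.randrange(n), rng.randrange(n)
    w[j][i] = min(1.0, max(0.0, w[j][i] + rng.uniform(-amount, amount)))
    return w

def coherentize(weights, rng, iters=500):
    """Keep a mutation only if the resulting map is more self-producing."""
    best, score = weights, self_production(weights)
    for _ in range(iters):
        candidate = mutate(best, rng)
        s = self_production(candidate)
        if s > score:
            best, score = candidate, s
    return best, score

rng = random.Random(3)
n = 3
start = [[rng.uniform(0.0, 0.3) for _ in range(n)] for _ in range(n)]
coherent, score = coherentize(start, rng)
print(f"self-production: {self_production(start):.3f} -> {score:.3f}")
```

No "fancy mathematical algorithm" is involved: each step is a blind local change, retained or discarded by testing, which is the kind of search the text suggests the brain must use.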
4. THE EVOLUTION OF CONSCIOUSNESS
The theory of the perceptual/cognitive loop has the potential to shed some light on the question of the history of human consciousness. It is important to make a distinction here: it is absurd to speak of the "evolution" of raw consciousness, of consciousness as First. Raw consciousness is about as primal as anything can get: it exists to some small extent wherever there is any kind of structure. But the consciousness of objects, the type of consciousness most commonly discussed and analyzed, is not primal in the same way. It is not present at all in the rock, bacterium, the worm, or the tree; and while it is clearly there in the chicken, lizard and cat (to choose some animals at random), only in us does it achieve the high degree of flexibility and effectiveness that we commonly associate with it. Clearly, the consciousness of objects evolved. The question is: when, and how?
Julian Jaynes (1976) has argued that consciousness evolved suddenly rather than gradually, and that this sudden evolution occurred in the very recent past. He believes that the humans of Homeric times were not truly conscious in the sense that we are. His argument is based primarily on literary evidence: the characters in the Odyssey never speak of an "inner voice" of consciousness. Instead they refer continually to the voices of the gods. Jaynes proposes that this "hearing of voices," today associated with schizophrenia, was in fact the root of modern consciousness. Eventually the voice was no longer perceived as a voice, but as a more abstract inner guiding force, in other words "consciousness."
Jaynes' theory is admirable in its elegance and boldness; unfortunately, however, it makes very little scientific sense. Inferring the history of mind from the history of literary style is risky, to say the least; and Jaynes' understanding of schizophrenia does not really fit with what we know today. But despite the insufficiency of his arguments, I believe there is a kernel of truth in Jaynes' ideas. I will argue that the idea of a sudden appearance of modern consciousness is quite correct - - but for very different reasons than those which Jaynes put forth.
My argument is quite simple. The perceptual/cognitive loop relies on two abilities: the perceptual ability to recognize elementary "features" in sense data, and the cognitive ability to link conjectural "wholes" with items in memory. A sudden jump in either one of these abilities could therefore lead to a sudden jump in consciousness. I have argued elsewhere (Goertzel, 1993a,b) that the memory at some point underwent a sudden structural "phase transition." I suggest that this transition, if it really occurred, would have caused as a corollary effect a sudden increase in consciousness.
The argument for a phase transition in the evolution of memory rests on the Peircean vision of mind as an associative memory, with connections determined by habituation. Based on this simple foundation, it requires only certain commonplace results from the theory of random graphs. Suppose one takes N items stored in some organism's memory, and considers two items to be "connected" if the organism's mind has detected pragmatically meaningful relations between them. Then, if the memory is sufficiently complex, one may study it in an approximate way by assuming that these connections are drawn "at random." But in this approximation, the question arises: what is the chance that, given two memory items A and B, there is a connection between A and B? If this chance exceeds the value 1/(2N), then the memory is almost surely a "nearly connected graph," in the sense that one can follow a chain of associations from almost any memory item to almost any other memory item. On the other hand, if this chance is less than 1/(2N), then the memory is almost certainly a "nearly disconnected graph": following a chain of associations from any one memory item will generally lead only to a small subset of "nearby" memory items. There is a "phase transition" as the connection probability passes 1/(2N).

The evolutionary hypothesis, then, is this. Gradually, the brain became a better and better pattern recognition machine; and as this happened the memory network became more and more densely connected. In turn, the more effective memory became, the more useful it was as a guide for pattern recognition. Then, all of a sudden, pattern recognition became useful enough that it gave rise to a memory past the phase transition. Now the memory was really useful for pattern recognition: pattern recognition processes were able to search efficiently through the memory, moving from one item to the next to the next along a path of gradually increasing relevance to the given object of study.
The drastically increased pattern recognition ability filled the memory in even more - - and all of a sudden, the mind was operating on a whole new level.
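The sharpness of this transition is easy to exhibit in simulation. In the sketch below, N memory items are pairwise connected at random with probability p, and connectivity is measured as the fraction of items reachable by a chain of associations from a single starting item; the sharp change is a classical random-graph phenomenon, though the exact threshold constant should be taken as a modeling assumption.

```python
import random

# Illustrative simulation of the associative-memory phase transition:
# n memory items, each pair connected independently with probability p.

def reachable_fraction(n, p, rng):
    """Build a random graph G(n, p), then breadth-first search from item 0."""
    adjacency = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adjacency[i].append(j)
                adjacency[j].append(i)
    seen, frontier = {0}, [0]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adjacency[u]:
                if v not in seen:
                    seen.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(seen) / n

rng = random.Random(42)
n = 400
for avg_degree in (0.5, 1.0, 2.0, 4.0):
    p = avg_degree / n          # p chosen so each item has avg_degree links
    f = sum(reachable_fraction(n, p, rng) for _ in range(5)) / 5
    print(f"average degree {avg_degree}: reachable fraction ~ {f:.2f}")
```

Below roughly one connection per item, chains of association reach only a small neighborhood; above it, most of the memory becomes reachable from almost anywhere - - the "whole new level" of functioning described above.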
And one consequence of this "new level" of functioning may have been - - an effective perceptual/cognitive loop. In a mind without a highly active associative memory, there is not so much need for a PCL: coherentization is a protection against reorganizing processes which are largely irrelevant to a pre- threshold memory network. In a complex, highly interconnected memory, reorganization is necessary to improve associativity, but in a memory with very few connections, there is unlikely to be any way of significantly improving associativity. Furthermore, even if the pre- threshold memory did have need of a PCL, it would not have the ability to run the loop through many iterations: this requires each proposed coherentization to be "tested" with numerous different connections. But if few connections are there in the first place, this will only very rarely be possible.
So, in order for the cognitive end of the loop to work properly, one needs a quality associative memory. A phase transition in associative memory paves the way for a sudden emergence of consciousness. This is a speculative story, yes, but unlike Jaynes' account it relies on precise models of what is going on inside the brain. The plausibility of the story could be tested by computer simulations in a straightforward way - - one context in which this might be done is briefly described in the final section.
It is worthwhile to contrast this view with Dennett's theory of consciousness and its evolution, briefly mentioned in the Introduction. Dennett wants to view consciousness as a social behavior rather than an innate property of the brain. He draws on Richard Dawkins' idea of a meme - - a sociocultural pattern, passed along from generation to generation just like a biological trait. Dennett believes that
Human consciousness is itself a huge complex of memes (or, more exactly, meme- effects in brains) that can best be understood as the operation of a "von Neumannesque" [serial] virtual machine implemented in the parallel architecture of a brain that was not designed for any such activities. The powers of this virtual machine vastly enhance the underlying powers of the organic hardware on which it runs, but at the same time many of its most curious features, and especially its limitations, can be explained as the byproducts of the kludges that make possible this curious but effective reuse of an existing organ for novel purposes.
Thinking of the streams of consciousness that permeate James Joyce's fiction, Dennett gives this "von Neumannesque" serial machine the alternate label "Joycean machine." And, subjectively, in most states of mind at any rate, consciousness does seem to flow like a stream rather than an ocean: all in one direction, one thought after another.
"I am sure you want to object," Dennett writes: "[a]ll this has little to do with consciousness! After all, a von Neumann machine is entirely unconscious: why should implementing it ... be any more conscious?" But this objection does not faze him:
I do have an answer: The von Neumann machine, by being wired up from the outset that way, with maximally efficient informational links, didn't have to become the object of its own elaborate perceptual systems. The workings of the Joycean machine, on the other hand, are just as "visible" and "audible" to it as any of the things in the external world that it is designed to perceive -- for the simple reason that they have much of the same perceptual machinery focused on them.
Dennett's book contains many interesting explanations of various strange features of human consciousness (what he calls "kludges"). But what, at bottom, is his theory doing? His Joycean machine is a close cousin of my perceptual-cognitive loop: both are serially operating neural circuits which act largely by focusing perceptual machinery upon their own products. But whereas Dennett takes this virtual serial machine for the essence of consciousness, I take it to be a device for amplifying raw consciousness into the consciousness of objects. This is a crucial distinction.
What then becomes of the "consciousness is a collection of memes" idea? Clearly, the wild child raised without social contact still has not only raw consciousness, but some degree of consciousness of objects (the same degree, at least, possessed by non- human mammals). What he is missing, I suggest, is the ability to subvert the neural circuitry evolved for the consciousness of objects, and focus this circuitry on abstract patterns, on e.g. linguistic and social forms. This, I suggest, is the meme collection; not consciousness of objects itself. But this leads us on to the next section....
5. SUBVERTING THE PERCEPTUAL/COGNITIVE LOOP
So far we have focused on the consciousness of objects. But the theory of consciousness as randomness also has something to say about a different manifestation of raw consciousness: the state of awareness achieved through intensive meditation, which has sometimes been called "consciousness without an object."
The very indescribability of the meditative state has become a cliché. The Zen Buddhist literature, in particular, is full of anecdotes regarding the futility of trying to understand the "enlightened" state of mind. Huang Po, a Zen master of the ninth century A.D., framed the matter quite clearly:
Q: How, then, does a man accomplish this comprehension of his own Mind?
A: That which asked the question IS your own Mind; but if you were to remain quiescent and to refrain from the smallest mental activity, its substance would be seen as a void -- you would find it formless, occupying no point in space and falling neither into the category of existence nor into that of non-existence. Because it is imperceptible, Bodhidharma said: 'Mind, which is our real nature, is the unbegotten and indestructible Womb; in response to circumstances, it transforms itself into phenomena. For the sake of convenience, we speak of Mind as intelligence, but when it does not respond to circumstances, it cannot be spoken of in such dualistic terms as existence or nonexistence. Besides, even when engaged in creating objects in response to causality, it is still imperceptible. If you know this and rest tranquilly in nothingness -- then you are indeed following the Way of the Buddhas. Therefore does the sutra say: 'Develop a mind which rests on no thing whatever.'
The present theory of consciousness suggests a novel analysis of this state of mind that "rests on no thing whatever." Consider: the perceptual/cognitive loop, if it works as I have conjectured, evolved for the purpose of making percepts cohesive. The consciousness of objects is a corollary, a spin-off of this process. Consciousness, raw consciousness, was there all along, but it was not intensively focused on one thing. Meditative experience relies on subverting the PCL away from its evolutionarily proper purpose. It takes the intensity of consciousness derived from repeated iteration, and removes this intensity from its intended context, thus producing an entirely different effect.
This explains why it is so difficult to achieve consciousness without an object. Our system is wired for consciousness with an object. To regularly attain consciousness without an object requires the formation of new neural pathways. Specifically, I suggest, it requires the development of pathways which feed the perceptual end of the perceptual/cognitive loop random stimuli. Then the perceptual end will send messages to the cognitive end, as if it were receiving structured stimuli -- even though it is not receiving any structured stimuli. The cognitive end then tries to integrate the random message into the associative memory -- but it fails, and thus the perceptual end makes a new presentation. And so on, and so on.
What happens to the novice meditator is that thoughts from the associative memory continually get in the way. The cognitive end makes suggestions regarding how to coherentize the random input that it is receiving, and then these suggestions cycle around the loop, destroying the experience of emptiness. Of course these suggestions are mostly nonsense, since there is no information there to coherentize; but the impulse to make suggestions is quite strong and can be difficult to suppress. The cognitive end must be trained not to make suggestions regarding random input, just as the perceptual end must be trained to accept random input from sources other than the normal sensory channels.
Let me not be misunderstood: inherently, consciousness without an object has nothing to do with the PCL. The essence of the meditative state is the experience of a large quantity of randomness -- randomness not associated with any particular structure, but rather isolated from all structure, experienced as an entity unto itself: pure formlessness, pure incomprehensibility. My aim in this section has been to uncover the means by which this state of mind can be attained by the human organism. Other organisms could conceivably achieve the same state by different mechanisms.
6. CREATIVE INSPIRATION
Another intriguing variety of human consciousness, closely related to the meditative state, is the condition of creative inspiration. Many highly creative thinkers and artists have described the role of consciousness in their work as being very small. The biggest insights, they have claimed, always pop into the consciousness whole, with no deliberation or decision process whatsoever -- all the work has been done elsewhere. And yet, these sudden insights always concern some topic that, at some point in the past, the person in question has consciously thought about. A person with no musical experience whatsoever will never all of a sudden have an original, fully detailed, properly constructed symphony pop into her head. Someone who has never thought about physics will not wake up in the middle of the night with a brilliant idea about how to construct a unified field theory. Clearly there is something more than "divine inspiration" going on here. The question is: what are the dynamics of this subtle interaction between consciousness and the unconscious?
In the present theory, there is no rigid barrier between consciousness and the unconscious; everything has a certain degree of consciousness. But only in the context of an iterative loop does a single object become fixed in consciousness long enough that raw consciousness becomes comprehensible as consciousness of that object. The term "unconscious" may thus be taken to refer to those parts of the brain that are not directly involved in a consciousness-fixing perceptual/cognitive loop.
Hadamard's classic The Psychology of Invention in the Mathematical Field (1954) makes a strong case for the prevalence of unconscious creativity in mathematical research. But perhaps the most striking description of this feeling of "inspiration" was given by Friedrich Nietzsche in his autobiographical essay Ecce Homo (1888):
Has anyone at the end of the nineteenth century a clear idea of what poets of strong ages have called inspiration? If not, I will describe it. -- If one has the slightest residue of superstition left in one's system, one could hardly resist altogether the idea that one is merely incarnation, merely mouthpiece, merely a medium of overpowering forces. The concept of revelation -- in the sense that suddenly, with indescribable certainty and subtlety, something becomes visible, audible, something that shakes one to the last depths and throws one down -- that merely describes the facts. One hears, one does not seek; one accepts, one does not ask who gives; like lightning, a thought flashes up, with necessity, without hesitation regarding its form -- I never had any choice....
Everything happens involuntarily in the highest degree but as in a gale of a feeling of freedom, of absoluteness, of power, of divinity. -- The involuntariness of image and metaphor is strangest of all; one no longer has any notion of what is an image or metaphor: everything offers itself as the nearest, most obvious, simplest expression....
What Nietzsche describes here is an extreme manifestation of a very common phenomenon. But what he does not tell us in this passage is the number of years he spent consciously working on his philosophy and his style, seeking powerful images and metaphors, struggling with difficult thoughts. In literature and philosophy, as in mathematics and science, one must struggle with forms and ideas, until one's mind becomes at home among them; or in other words, until one's consciousness is able to perceive them as unified wholes. Once one's consciousness has perceived an idea as a coherent whole -- then one need no longer consciously mull over that idea. The idea is strong enough to withstand the recombinatory, self-organizing dynamics of the unconscious. And it is up to these dynamics to produce the fragments of new insights -- fragments which consciousness, once again entering the picture, may unify into new wholes.
Without the perceptual/cognitive loop to aid it, the unconscious would not be significantly creative. It would most likely recombine all its contents into a tremendous, homogeneously chaotic mush ... or a few "islands" of mush, separated by "dissociative" gaps. But the perceptual/cognitive loop makes things coherent; it places restrictions on the natural tendency of the unconscious to combine and synthesize. Thus the unconscious is posed the more difficult problem of relating things with one another in a manner compatible with their structural constraints. The perceptual/cognitive loop produces wholes; the unconscious manipulates these wholes to produce new fragmentary constructions, new collections of patterns. And then the perceptual/cognitive loop takes these new patterns as raw material for constructing new wholes.
So, what is the relation between the creative state and the meditative state? Instead of a fixation on the void of pure randomness, the creative condition is a fixation of consciousness on certain abstract forms. The secret of the creative artist or scientist, I propose, is this: abstract forms are perceived with the reality normally reserved for sense data. Abstract forms are coherentized with the same vigor and effectiveness with which everyday visual or aural forms are coherentized in the ordinary human mind. Like the meditative state, the creative state subverts the perceptual/cognitive loop; it uses it in a manner quite different from that for which evolution intended it.
One may interpret this conclusion in a more philosophical way, by observing that the role of the perceptual/cognitive loop is, in essence, to create reality. The reality created is a mere "subjective reality," but for the present purposes, the question of whether there is any more objective reality "out there" is irrelevant. The key point is that the very realness of the subjective world experienced by a mind is a consequence of the perceptual/cognitive loop and its construction of boundaries around entities. This means that reality depends on consciousness in a fairly direct way: and, further, it suggests that what the highly creative mind accomplishes is to make abstract ideas a concrete reality.
This has interesting philosophical implications. Recall that, if one accepts the statistical version of the quantum theory of measurement, one must also accept that there is no fundamental difference between mind and reality. Both are just collections of observable patterns. The fundamental question is then how one observable pattern affects another -- this is what the dynamical laws of quantum physics address; and, in a different context, it is what the laws of psychology address as well (e.g. Peirce's "one law of mind"). The perceptual/cognitive loop, seen in this extremely abstract light, is a means by which patterns can protect themselves from being affected by other patterns. What I am proposing, phenomenologically, is that it is this kind of protection which is responsible for the apparent solidity of the "real world," as opposed to the fluidity of thoughts and emotions. From this perspective, the "making real" of abstract ideas is not a metaphor but a literal statement: in the subjective world of the creating mind, artistic and conceptual constructions are solid and real in the same sense that chairs, mountains and bodies are solid and real. This may seem to be an extreme conclusion -- but it does an admirable job of explaining the experience of creative inspiration as described in the Nietzsche quote given above.
Getting back to Dennett's theory, it is this kind of subversion, I suggest, that is the memetic aspect of consciousness. This kind of subversion is what the wild child is inevitably missing; it is what we learn from our parents in early childhood. The biological circuitry is there to amplify raw consciousness into the consciousness of objects; but it is up to society to modify this circuitry to focus on the ideal world.
7. COMPUTER CREATIVITY
This approach to creativity has interesting implications for artificial intelligence (AI). The question "how can we make computers more creative?" has been asked many times. But it has never been placed in the context of an adequate general theory of creativity and consciousness.
The root of the problem is of course the Church-Turing Thesis, which states that everything which can be precisely formulated can be simulated on a simple digital computer. The only catch is that the computer must be supplied with a potentially infinite amount of memory, and must be allowed to run the simulation for an arbitrarily long period of time. This would seem to imply that computers can be creative -- at the very least, to the same extent that humans can. But some (e.g. Dreyfus, 1978) have claimed otherwise; they have argued that human creativity cannot be precisely formulated, and hence is not susceptible to the Church-Turing Thesis.
I believe that there is a grain of truth in this anti-computationalist assumption -- but only a grain. The key point, if one believes the randomness theory of consciousness, is the distinction between the random and the pseudorandom. Digital computers are capable of pseudo-randomness -- as exemplified by the "random number generators" supplied in most programming languages -- but they can never display truly random behavior. Thus quantum physics would appear to contradict the Church-Turing Thesis -- for though the dynamical equations are computable to within any specified accuracy, the truly random choice involved in measurement cannot be simulated on any computer. But in fact, this contradiction is only apparent. It can never be empirically verified, because there is no way to distinguish true randomness from pseudorandomness. And this is true in principle, mathematically, as well as in practice; as Gregory Chaitin (1987) has shown, it is a consequence of Gödel's Incompleteness Theorem (for the reason that no formal system can prove the randomness of a proposition whose "algorithmic information" exceeds the algorithmic information of the formal system). "Random," from the point of view of any particular system, just means "containing no patterns recognizable by me."
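The point about pseudo-randomness can be made concrete with a minimal sketch (Python; the function name `pseudorandom_bits` is mine, introduced for illustration only). A programming language's "random number generator" is fully deterministic: re-seeded identically, it reproduces its output bit for bit. Its output thus contains a pattern, namely the generator algorithm plus the seed, even though statistical tests applied to the bits alone are not guaranteed to reveal it.

```python
import random

def pseudorandom_bits(seed, n):
    """Generate n 'random' bits from a deterministic algorithm."""
    rng = random.Random(seed)  # a fixed seed fully determines the sequence
    return [rng.randint(0, 1) for _ in range(n)]

run1 = pseudorandom_bits(42, 20)
run2 = pseudorandom_bits(42, 20)

# The two "random" runs are identical: the sequence is patterned
# (generator + seed), however patternless it may look from outside.
assert run1 == run2
```

In the vocabulary of this paper, the sequence is "random" only relative to observers who cannot recognize the pattern constituted by the generator and its seed.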
This provides an intriguing twist on the philosophical problem of "other minds." How can we know the extent to which another person is conscious at some particular moment? We can't -- because, to do this, we would have to distinguish the true randomness of consciousness from pseudorandomness. And there is no sure way to make this distinction. Thus the insolubility of the other minds problem is a corollary of the inability to recognize randomness, which is a form of Gödel's Theorem! Seen in the light of the randomness theory of consciousness, Gödel's Theorem and the other minds problem are one and the same!
How to Program Creativity
Given a computer with sufficiently fast or numerous processors, one could achieve humanlike creativity by programming the dynamical equations of the brain. But this is a task for the AI of the future. Contemporary AI is, like politics, "the art of the possible": it requires the program designer to make up in cleverness what he or she lacks in processing power. The brain works largely by a logic of redundancy: often, as in motion detection, the same calculation is made thousands of times with terrible accuracy, and the average result is taken as the correct answer (Pribram, 1991). AI programs cannot afford to be redundant to this extent; they must use "short-cuts" of one kind or another.
Applied to creativity, what does this mean? In the human brain, perceptual/cognitive loops emerge in context; they are part and parcel of the rest of brain structure. But the AI programmer cannot allow so much to evolve by trial and error. It makes sense, given the practical limitations of contemporary computers, to program in the perceptual/cognitive loop. As a first approximation, one envisions a three-level parallel architecture. Level 1 is a collection of feature-detecting processors, which recognize primitive patterns in sensory input. Level 2 contains processors which attempt to group these patterns into coherent wholes. And finally, Level 3 contains an associative memory; it stores the wholes which Level 2 has recognized. The interaction between Levels 2 and 3 follows the logic of the perceptual/cognitive loop: that is, Level 2 sends Level 3 its conjectural wholes, and Level 3 tries to classify these wholes on the basis of its associative memory. If a certain proposed coherent entity is more easily classifiable based on its parts than based on its overall structure, it is sent down again for further coherentization. But if this process repeatedly fails, then it is accepted that no more coherentization is possible; Level 3 takes the "whole" provided to it, and classifies it based on its parts.
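The control flow of this three-level scheme can be sketched in a few lines of code. The Python fragment below is a toy illustration only: the feature detector, grouping rule, and memory test (`level1_features`, `level2_coherentize`, and so on) are hypothetical placeholders of my own devising, standing in for whatever algorithms an actual implementation would select; only the loop's logic is the point.

```python
def level1_features(signal):
    """Level 1: detect primitive patterns -- here, simply adjacent pairs."""
    return [tuple(signal[i:i + 2]) for i in range(len(signal) - 1)]

def level2_coherentize(features):
    """Level 2: group features into one conjectural whole."""
    return tuple(sorted(set(features)))

def level3_classifies_whole(whole, memory):
    """Level 3: can the conjectural whole be matched against memory as a unit?"""
    return whole in memory

def perceptual_cognitive_loop(signal, memory, max_iterations=5):
    """Iterate the Level 2 / Level 3 interaction described in the text."""
    features = level1_features(signal)
    for _ in range(max_iterations):
        whole = level2_coherentize(features)
        if level3_classifies_whole(whole, memory):
            memory.add(whole)            # accepted as a coherent whole
            return whole, True
        # Classification by overall structure failed: send the entity back
        # down for further coherentization (toy model: drop one feature).
        features = features[:-1] if len(features) > 1 else features
    # Repeated failure: accept that no more coherentization is possible,
    # and let Level 3 store the entity classified by its parts.
    whole = level2_coherentize(features)
    memory.add(whole)
    return whole, False
```

A run against a memory that already contains the whole terminates on the first iteration; against an empty memory, the loop exhausts its iterations and falls back to a parts-based classification, mirroring the two outcomes described above.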
This three-level scheme is extremely general; but this generality is necessary. Different sensory modalities require different sorts of coherentization processes. And memory also is not single but multiple -- the brain contains a number of different "memory systems," each storing things according to different principles (Pribram, 1991). The construction of an effectively functioning AI system along these lines will undoubtedly require a certain amount of engineering. One must select algorithms for the different levels, based not only on their intrinsic merits, but also on how well they will work together (a problem which the brain presumably solves by redundancy and natural selection of neural pathways).
The most obvious context in which to flesh out this idea is vision processing. Indeed, a three-level parallel architecture for vision processing has already been developed (Levitan et al., 1987), the levels of which correspond roughly to the three levels described above. However, the dynamics which have been programmed for this architecture have nothing to do with the perceptual/cognitive loop; they are a good bit simpler in nature. And unfortunately, the results obtained with this architecture, while acceptable, have been far short of spectacular. Because of the huge size of vision processing problems -- the immense number of bits needed to store a single picture, or a single visual feature -- it is difficult to tell whether the limitations of the architecture are due to a) the shortcomings of the three-level architecture, b) the shortcomings of the dynamics imposed on the architecture, or c) merely the insufficient size of the machine as opposed to the memory and speed requirements of vision processing.
So, despite the attractiveness of vision processing, there is a strong argument for starting with something less computationally demanding -- say, the processing of music. In this case the sensory input would simply be series of notes. Level 1 would recognize what are known as "motifs" -- small patterns of rising or falling notes, implicit harmonies, and so forth. Level 2 would attempt to group these motifs into melodies -- a melody being the minimum musical unit that is heard with significant intuitive wholeness. Musicologists have determined a number of "rules of thumb" for grouping motifs into melodies (which are particularly simple in the context of non-Western music; see Hughes, 1991). And Level 3 would contain a memory bank of melodies, each one stored with links to other melodies that are significantly structurally related to it (e.g. melodies with similar contour, melodies in the same key, melodies with shared motifs, etc.).
Level 3 would judge the quality of a conjectural melody proposed by Level 2 by comparing the melody with the others in its memory. For each conjectural melody, it would ask: does this melody have enough overall links with other melodies, as opposed to mere links between its component parts and other melodies? Does it have enough links within itself that the memory will cause it to "reproduce itself" instead of fragmenting it? And it would answer these questions by randomly acting on the conjectural melody -- e.g. by mutating it and determining the number of steps required to change it into another melody, or by operating on it with certain transformations (e.g. changing the key) to see if it can be made more similar to another melody without sacrificing its essential structure.
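This melody-judging step can be made concrete with a small sketch. The functions below (`motifs`, `overall_links`, `judged_coherent`) are hypothetical names of my own, and the interval-motif representation and random-mutation test are one plausible reading of the procedure just described, not a specification of it.

```python
import random

def motifs(melody, length=3):
    """Extract the set of pitch-interval motifs of a fixed length."""
    intervals = [b - a for a, b in zip(melody, melody[1:])]
    return {tuple(intervals[i:i + length])
            for i in range(len(intervals) - length + 1)}

def overall_links(melody, memory):
    """Count stored melodies sharing at least one motif with the whole."""
    m = motifs(melody)
    return sum(1 for stored in memory if m & motifs(stored))

def judged_coherent(melody, memory, rng, trials=10, threshold=1):
    """Level 3's test: randomly mutate the melody and ask whether the
    unmutated whole links to memory at least as strongly as its mutations
    do on average -- i.e. whether its structure is worth preserving."""
    base = overall_links(melody, memory)
    mutated_scores = []
    for _ in range(trials):
        mutant = list(melody)
        i = rng.randrange(len(mutant))
        mutant[i] += rng.choice([-2, -1, 1, 2])   # perturb one note
        mutated_scores.append(overall_links(mutant, memory))
    return base >= threshold and base >= sum(mutated_scores) / trials
```

For example, a melody sharing motifs with a stored melody survives the mutation test, while one sharing no motifs with anything in memory is rejected outright.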
One should not misunderstand the purpose of such an experiment: although the construction of such a program may be guided by a theory of consciousness, the program's own degree of consciousness will still be quite minimal. Compared to the brain, there is very little structure involved. But the point is that, even lacking the technology to simulate human consciousness and creativity, we can still simulate some of the dynamics which surround and facilitate these processes. This project should produce interesting AI programs, and it may also provide significant insights into the functioning of the human processes being modeled.
8. CONCLUSION
Peirce derived the randomness theory of consciousness from his philosophical system. Here, on the other hand, I have attempted to show that the randomness theory of consciousness can be useful; that it has interesting things to say about the role of consciousness in concrete contexts. By judicious application of the randomness theory, I have shown, the quantum theory of consciousness is stripped of much of its mystery; the neuropsychology of ordinary and altered states of consciousness is given a collection of unprecedentedly far-reaching and definite hypotheses; and the psychology of human and computer creativity is given a novel, solid foundation. This is not an exhaustive list of possible applications of the randomness theory; but it should be adequate for a start.
In itself, the randomness theory of consciousness is not susceptible to empirical proof -- it is a philosophical theory. However, the neuropsychological, physical, phenomenological and computational conjectures given here, and directly inspired by the randomness theory, are just as testable as any other scientific hypotheses. It is by the success of these scientific hypotheses, and any others which follow them, that the utility of the randomness theory as a whole should be gauged.
No other theory has yet been able to account for the various phenomena associated with consciousness in such a thorough, consistent way. Thus it seems reasonable to suggest that, if we are to work toward a unified scientific theory of consciousness, the path to follow is the one laid out by Peirce over a century ago: consciousness as absolute, raw spontaneity, expressed in the world of structure as randomness.
NOTE
This paper owes something to the constructive criticisms of two people who were kind enough to read an earlier draft: Kent Palmer and, especially, Allan Combs. Needless to say, this does not imply that either of them accepts all of the ideas proposed here.
REFERENCES
Bateson, G. (1980). Mind and Nature: A Necessary Unity, Bantam, NY
Bisiach, E. and A. Berti (1987). "Dyschiria: an Attempt at its Systematic Explanation," in Neurophysiological and Neuropsychological Aspects of Spatial Neglect, Ed. by M. Jeannerod, North-Holland, Amsterdam
Bisiach, E., S. Meregalli and A. Berti (1985). "Mechanisms of Production-Control and Belief-Fixation in Human Visuo-Spatial Processing: Clinical Evidence from Hemispatial Neglect." Paper presented at Eighth Symposium on Quantitative Analysis of Behavior, Harvard University, June 1985
Chaitin, G. (1987). Algorithmic Information Theory, Addison-Wesley, NY
Dennett, Daniel (1991). Consciousness Explained, Little, Brown and Co., Boston
Deutsch, D. (1985). "Quantum Theory, the Church-Turing Principle and the Universal Quantum Computer," Proc. R. Soc. London A 400, pp. 97-117
Edelman, G. (1991). The Remembered Present, Basic, NY
Goertzel, B. (1991). "Quantum Theory and Consciousness," Journal of Mind and Behavior
Goertzel, B. (1993). The Structure of Intelligence, Springer-Verlag, NY
Goertzel, B. (1993a). The Evolving Mind, Gordon and Breach, NY
Goertzel, B. (1993b). "Phase Transitions in Associative Memory Networks," Minds and Machines
Goertzel, B. (1994). Chaotic Logic, Plenum Press, NY
Goswami, A. (1990). "Consciousness in Quantum Physics and the Mind-Body Problem," Journal of Mind and Behavior 11(1)
Hadamard, Jacques (1954). An Essay on the Psychology of Invention in the Mathematical Field, Dover, New York
Hartmann, E. (1991). Boundaries in the Mind, Basic, NY
Hebb, D.O. (1949). The Organization of Behavior, Wiley, NY
Hughes, David (1991). "Grammars of Non-Western Musics: A Selective Survey," in Representing Musical Structure, Ed. by Howell, West and Cross, Academic, New York
Jaynes, Julian (1976). The Origin of Consciousness in the Breakdown of the Bicameral Mind, Houghton Mifflin, Boston
Kawabata, N. (1986). "Attention and Depth Perception," Perception 15, pp. 563-572
Mandler, G. (1985). Cognitive Psychology: An Essay in Cognitive Science, Erlbaum Press, Hillsdale NJ
Nietzsche, F. (1888/1969). Ecce Homo, English translation by Walter Kaufmann, Random House, NY.
Peirce, Charles S. (1935). Collected Works of Charles S. Peirce, Volume 8: Scientific Metaphysics. Cambridge MA: Harvard University Press
Peres, Asher (1986). "When Is a Quantum Measurement?", American Journal of Physics 54, pp. 688-692
Pribram, Karl (1991). Brain and Perception, Erlbaum, Hillsdale NJ
Putnam, Hilary (1975). Mind, Language and Reality, Cambridge University Press
Prinzmetal, W., D. Presti and M. Posner (1986). "Does Attention Affect Feature Integration?", J. Exp. Psychol.: Human Perception and Performance 12, pp. 361-369
Rizzolatti and Gallese (1988). "Mechanisms and Theories of Spatial Neglect," in Handbook of Neuropsychology v.1, Elsevier, NY
Rizzolatti, G., C. Scandolara, M. Gentilucci, and R. Camarda (1985). "Response Properties and Behavioral Modulation of 'Mouth' Neurons of the Postarcuate Cortex (Area 6) in Macaque Monkeys," Brain Research 255, pp. 421-424
Treisman, A., and H. Schmidt (1982). "Illusory Conjunctions in the Perception of Objects," Cognitive Psychology 14, pp. 107-141
Tsal, Y. and L. Kolbet (1985). "Disambiguating Ambiguous Figures by Selective Attention," Q.J. Exp. Psychology 37A, pp. 25-37
Umilta, C. (1988). "Orienting of Attention," in Handbook of Neuropsychology v.1, Elsevier, NY
Wheeler, J.A. (1980). "Delayed-Choice Experiments and the Bohr-Einstein Dialog," in American Philosophical Society, pp. 9-40
Wigner, E. (1962). Symmetries and Reflections, Indiana University Press, Bloomington IN