Patterns of Awareness
Ben Goertzel
February 22, 2004
A novel solution to the “problem of consciousness” is presented. The solution is grounded in Peircean philosophy and algorithmic information theory. It is panpsychist in the sense that it posits that every entity in every universe has awareness to some extent; but it provides a specific explanation for the fact that some entities have far more intense streams of awareness than others. The nature of qualia denoting temporal experience is also discussed; and implications for neuroscience and AI are briefly considered.
In a famous 1995 paper, David Chalmers summed up the essential philosophical “problem of consciousness” very well:
The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. … When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. … Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought.
… Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? … Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.
George Mandler (1975) made a similar point by distinguishing the “hot” and “cold” aspects of consciousness. The cold aspects of consciousness have to do with the structure of attention: how long we tend to focus on what things, what kinds of cognitive operations we carry out when we focus on something, and so forth. I have written on this topic in the context of Webmind, an AI system I worked on in the late 1990s (Goertzel, 2000). The hot aspect of consciousness is what Chalmers calls “experience” – subjectivity – the “raw feel” of being, knowing and feeling.
I think Chalmers almost hit the mark with his formulation of the “hard problem,” but I think he made a crucial conceptual error that renders the problem, as he poses it, unsolvable. Here I’ll propose a different variation of the “hard problem,” and present a novel philosophical solution, which I believe accords well with known science as well as providing significant conceptual clarity.
There is reason to be somewhat skeptical of the approach of “reformulating the problem.” Reformulation can be used as an excuse for wimping out – the classic case is Daniel Dennett (1992), who proposed a theory of consciousness that basically “solved” the problem by assuming experience doesn’t exist. However, I don’t think my reformulation is a wimpy one. Rather, I think it’s gutsier than Chalmers’s formulation, in terms of confronting the sometimes counterintuitive nature of reality.
The problem is that Chalmers assumes “physical processing gives rise to [experience].” This presupposes that a solution to the problem of consciousness has to be of the form “Here is a mechanism that takes in physical processes and spits out experiences.” But I think a better formulation is, “Why are some physical processes closely associated with subjective experiences?” The difference may seem subtle but philosophically, it’s quite significant. Rather than posing physical processing as the causal agent, my reformulation opens the door to the hypothesis of a deeper realm of being that encompasses both physical and experiential reality, and explains their interrelationship.
A solution to the hard problem of consciousness in terms of a deeper realm may not feel as satisfying as a solution in terms of some “mystery mechanism” that creates experiences from particular physical phenomena. But I think it should be clear by now that this mystery mechanism won’t be found – the universe just doesn’t work that way.
To solve the problem of consciousness, one needs to shift context. Stop thinking about physical systems, and start thinking about patterns. Consider patterns as foundational: consider the universe as being made of patterns. Consider there as being a set of elemental items, and then patterns among these elemental items. Assume each elemental item has a finite set of finitely-describable properties.
This “pattern philosophy” perspective has many philosophical roots, from early-Wittgensteinian logical positivism and Machian psychologism to the more explicitly pattern-based approaches of Charles Peirce, Gregory Bateson, Benjamin Whorf, and the late Nietzsche. But for the present purposes, we don’t need to explore the metaphysical nature of these assumptions too carefully; we can simply proceed with them and see where they lead.
What is a pattern? A pattern in some entity X is a function f that computes X from some data D, with the property that
simplicity(f) + simplicity(D) < simplicity(X)
The entity X is the “ground” of the pattern. In other words, a pattern is a simplification, a representation-as-something-simpler (Goertzel, 1997). This notion of pattern is closely tied to algorithmic information theory (Chaitin, 1992).
To define pattern, one must therefore already have a notion of “simplicity.” We assume this is elemental and given – more will be said about this later.
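To make the definition concrete, here is a toy sketch in Python, under the loudly flagged assumption that zlib-compressed length can stand in for the elemental simplicity measure; the particular function f, the data D, and the cost charged for describing f are hypothetical illustrations, not part of the theory itself.

```python
import random
import zlib

def simplicity(data: bytes) -> int:
    """Crude proxy for simplicity: smaller compressed size = simpler entity."""
    return len(zlib.compress(data))

def f(seed: bytes) -> bytes:
    """A short process that deterministically regenerates the ground from a seed."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(1000))

D = b"42"          # the data the process runs on
X = f(D)           # the ground: 1000 pseudorandom bytes, nearly
                   # incompressible by zlib itself

cost_of_f = 120    # hypothetical fixed charge for describing f
is_pattern = cost_of_f + simplicity(D) < simplicity(X) and f(D) == X
print(is_pattern)  # True: (f, D) is a pattern in X
```

The inequality succeeds here because X, though a thousand bytes long and opaque to direct compression, is fully regenerable from a two-byte seed plus a short process – exactly the “representation-as-something-simpler” the definition describes.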
Now, suppose one has a dynamical universe, in which new patterns are continually occurring. In this context, I posit a general principle of consciousness: whenever a new pattern occurs, a new quale – a new instance of raw experience – occurs along with it.

Furthermore, I propose that each quale may be associated with one or more qualities of “intensity,” and that these “intensity” qualities are related to such things as the degree of simplification the pattern provides, and the complexity of the ground in which the pattern is observed. Specifically, I propose that patterns providing massive simplification of the most complex (non-simple) grounds are associated with the most intense qualia.
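As a toy quantification of this proposal – again with zlib length as a hypothetical stand-in for simplicity, and all names illustrative – one might score a pattern’s “intensity” as the amount of simplification it achieves over its ground:

```python
import random
import zlib

def simplicity(data: bytes) -> int:
    """Crude proxy: smaller compressed size = simpler entity."""
    return len(zlib.compress(data))

def intensity(cost_of_f: int, D: bytes, X: bytes) -> int:
    """Simplification achieved by representing the ground X as (f, D)."""
    return simplicity(X) - (cost_of_f + simplicity(D))

rng = random.Random(7)
small_ground = bytes(rng.randrange(256) for _ in range(500))
large_ground = bytes(rng.randrange(256) for _ in range(50000))

# The same short description, compressing a far more complex ground,
# yields a far higher intensity score.
print(intensity(50, b"seed", small_ground) <
      intensity(50, b"seed", large_ground))   # True
```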
This ties in closely with the philosophy of Charles S. Peirce (1982), who divided the universe into three categories: First, the category of raw, unanalyzed quality and feeling; Second, the category of reaction, of brute physical action and response; and Third, the category of relationship, habit and pattern.
What I have proposed so far, in Peircean terms, is firstly that new instances of Thirds are associated with new Firsts. Furthermore – going beyond anything Peirce said or hinted – I have proposed a connection between qualities of Thirds (complexity, relative simplification) and qualities of Firsts (experienced intensity).
How does this relate to Chalmers’ original problem, which was posed in terms of physical processes? In the pattern philosophy, physical processes are particular sorts of patterns. Electrons and protons and baseballs are understood as patterns in observed data. It’s easy enough to say that some of these patterns have more raw awareness associated with them than others. But this observation, while true, is not very satisfying. To go deeper we need to introduce the notion of system-relative simplicity and embodied qualia.
There is one more step to be taken, to connect this abstract perspective on qualia with everyday human experience. This is the notion of embodied qualia – qualia that are embodied in particular physical systems.
The first step toward embodied qualia is the notion of relative simplicity. Take an individual system S – say a human being, a turtle, a computer program, or a rock. One may associate a simplicity measure with this system S, by appending the current state of S to the data D considered above in the definition of pattern. That is, a function f is a pattern in a ground X relative to S if f computes X from the data D together with the current state of S, and

simplicity(f) + simplicity(D) < simplicity(X)
Note that the simplicity of S is not tabulated here – it’s left implicit. That is because we’re defining simplicity relative to S, taking S as a given.
We may then define qualia relative to S, as qualia associated with patterns that provide simplification relative to S – that is, simplification over and above the information already implicit in S.
In other words, if we have a pattern in S, and it provides new information (new simplification value) relative to what’s already in S, then this pattern contributes to the fund of qualia that have meaning relative to S.
But this concept doesn’t quite go far enough. For instance, suppose there’s an obscure pattern relating the timings of the firings of neurons in scattered parts of a certain human brain – but this pattern is never “noticed” by the rest of the brain, in the sense that it has no impact on its overall dynamics. Is it really the case that this pattern is a quale of the mind associated with that brain?
To deal with this issue we need to introduce the distinction between disembodied and embodied qualia. This notion may seem a little counterintuitive at first, but I believe it is a sound one. Remember, we are looking at reality differently from the usual way – we are taking a perspective in which patterns, rather than physical systems, are primary.
Disembodied qualia are experiences that are just “out there” in qualia-space: they’re not being experienced by anybody or anything.
For a quale to be experienced by something, I hypothesize, it needs to correspond to a pattern that is actually recognized by some system.
What does it mean for a pattern to be recognized by a system? Recall the definition of pattern: a (process, data) pair (f, D) is a pattern in an entity X if (f,D) is simpler than X, and executing f on data D produces X. Recognizing the pattern (f,D) in the entity X means that the system actually creates components corresponding to f and D, and then executes the f-component on the D-component to produce X. A pattern that is recognized by a system, in this sense, corresponds to a quale that is “embodied in” the system.
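The following sketch – with hypothetical names throughout – shows one way a “system” might recognize a pattern in this sense: it stores concrete components corresponding to f and D, executes the f-component on the D-component, and records the corresponding quale as embodied only if the ground X is actually reproduced.

```python
from typing import Callable

class System:
    """A toy system that embodies qualia by recognizing patterns."""

    def __init__(self) -> None:
        self.components = {}       # recognized (f, D) pairs, keyed by name
        self.embodied_qualia = []  # qualia embodied in this system

    def recognize(self, name: str, f: Callable[[bytes], bytes],
                  D: bytes, X: bytes) -> bool:
        if f(D) != X:
            return False           # (f, D) fails to reproduce the ground
        self.components[name] = (f, D)
        self.embodied_qualia.append(name)
        return True

s = System()
ok = s.recognize("repetition", lambda d: d * 3, b"ab", b"ababab")
print(ok, s.embodied_qualia)       # True ['repetition']
```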
We then have a hierarchy of quale types: at the bottom, potential (disembodied) qualia, associated with patterns that no system recognizes; next, qualia relative to a system S, associated with patterns that provide simplification relative to S; and finally, embodied qualia, associated with patterns that S actually recognizes.
Of course, the use of the English words “potential” and “embodied” here shouldn’t be overinterpreted.
In this perspective, it’s not the case that physical processes somehow cause qualia to come about. Rather, there is a fundamental connection between patterns and qualia, which is manifested in physical systems because physical systems are a particular sort of pattern. Human experiential reality has to do with the swarm of qualia associated with human brains – and in particular with the qualia that are embodied in human brains. Human experience and human brain dynamics are viewed as two associated aspects of an underlying reality, which is composed of patterns/qualia.
Applied to the human brain, this means that a human brain experiences a quale when a new pattern occurs in it, and the brain recognizes this pattern. If this pattern is particularly striking – particularly surprising based on the information already implicit in the brain – then it’s a particularly intense quale.
This theory is neutral as regards the particular neural mechanisms underlying human consciousness, or the mechanisms that must be put into place to make an AI system conscious. What it states is simply that the brain processes associated with consciousness will be the brain processes associated with representing newly perceived or conceived patterns, enacting new patterns, or taking pattern-codes out of memory and reassembling them into patterns once again.
The concept of embodied qualia allows us to explore the relationship between qualia and time. In a sense qualia live outside of time; yet we also experience qualia that involve the experience of time passing. A quale related to time passing corresponds, I suggest, to a pattern of change that’s recognized within a system. If a system recognizes a pattern that involves a change from one moment of physical time to the next, then, the embodied quale that results involves a “feeling of change.”
This leads to the fascinating question of “why time moves forwards.” Physicists have pondered the question of why macroscopic physical time moves forwards, while on the quantum level past and present seem perfectly symmetric – Julian Barbour’s book The End of Time (1999) contains some fascinating reflections on this subject. The same question exists on the psychological level, but is easier to resolve. Temporal qualia are not intrinsically either forward-going or backward-going. However, intelligent systems tend to be more often concerned with trying to predict the future from the past than vice versa. Therefore, the temporal qualia that are embodied in systems tend to have a forward-going flow to them. On the other hand, the disembodied qualia associated with a system will quite possibly be just as frequently backward-going as forward-going.
OK, you might say – all that’s very well, but even if I accept it, there’s still a “hard problem” left; actually two of them: first, why should patterns be associated with qualia – with “raw feel” – at all? And second, the definition of pattern depends on an assumed simplicity measure – where does this measure come from, and why one measure rather than another?
Both of these are meaningful issues, but both are – I believe – addressable in reasonably conceptually satisfying ways.
Regarding the first question, my answer is that there is no real need to consider qualia as separate from patterns. This distinction is a strange offspring of our language and our cultural-philosophical systems. Patterns exist, and part of this existence is what we refer to as “raw feel” or “experience-ness” or “quale-ness.” Basically, this is a kind of “panpsychist” answer, which asserts that everything is aware to some extent. But a key point is that while all patterns embody awareness, some patterns embody more awareness than others. Humans are more intensely aware than rocks – and this is because we are intensely pattern-recognizing creatures, whereas rocks are not.
The second question is a bit harder to grapple with. If one views the universe as made of patterns, then one runs up against the fundamental problem that pattern relies on simplicity – and one concludes that one cannot perceive or conceive a universe at all unless one begins with some innate intuition regarding which entities are simpler than others. This is closely related to the problem of induction. As Hume observed and modern probability theory has verified in detail, in order to predict the future from the past, one must make some kind of a priori assumption – otherwise there are just too many possible explanations of the past, making wildly different predictions regarding the future. The most elegant kind of a priori assumption to make is to impose a simplicity measure on the space of models of the past – which is equivalent to defining a simplicity measure so that one may formally state what functions are patterns in the past. This is the approach taken in Marcus Hutter’s (2003) recent mathematical theory of general intelligence, a deepening of Solomonoff’s (1964) pioneering work in the same vein.
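The flavor of this simplicity-prior idea can be sketched in a few lines – a toy in the spirit of Solomonoff induction, not Hutter’s full construction, with entirely hypothetical hypotheses and description lengths. Each hypothesis about a coin receives prior weight 2^(-description length), so simpler hypotheses dominate the weighted prediction:

```python
# Each hypothesis: (description length in bits, P(next flip = heads)).
# Lengths are invented for illustration.
hypotheses = {
    "all heads":        (4,  1.0),
    "alternating":      (6,  0.0),
    "biased 75% heads": (10, 0.75),
}

# Simplicity prior: weight 2^(-length), then predict by weighted average.
weights = {h: 2.0 ** -length for h, (length, _) in hypotheses.items()}
total = sum(weights.values())
p_heads = sum(weights[h] * p for h, (_, p) in hypotheses.items()) / total
print(round(p_heads, 3))   # 0.799 -- the simplest hypothesis dominates
```

Change the assumed description lengths and the prediction changes with them, which is precisely the measure-relativity at issue.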
So, at best, I have “solved” the hard problem of consciousness by reducing it to the problem of simplicity measure relativity – just as Solomonoff “solved” the problem of induction by reducing it to the problem of simplicity measure relativity. Of course, this parallel is fundamental, not coincidental. A simplicity measure is nothing more or less than a projection of unordered entities into a partially ordered (or better yet, fully ordered) domain. In Nietzschean terms, it is a ranking. Without assuming some ranking of entities in the universe – some “order of rank” – we can’t see patterns, we can’t talk about qualia, we can’t do anything.
A question then becomes: what makes one ranking better than another? This is a normative issue rather than a logical issue. One attractive answer is: the complexity of the space of patterns that ensues if one assumes the ranking. If a ranking leads to a large, diverse population of patterns then it’s a good one.
Different rankings correspond, in a sense, to different universes. The simplicity-measure dependence of qualia may then be reformulated as the statement that each quale belongs to some particular universe – just as each physical process or system does, each universe in turn “existing” within the larger multiversal space of possible universes.
Many other questions lurk here – for example, the emotional valence of qualia, and the relationship between qualia and free will. These are important, and I believe they can be explored within this framework, but I will do so elsewhere. My goal in this essay has been merely to propose a philosophical solution to the “hard problem of consciousness” as I conceive it.
I realize that my solution will not please everyone. However, if you’re displeased, I urge you to think hard about what sort of solution will ever be possible. No one is going to find an answer to “why experience exists” any more than one can find an answer to “why patterns exist.” These questions are too basic to be answered. Also, no one is ever going to find an answer to the precise question Chalmers has posed, namely “how do physical processes give rise to conscious experiences” – because in fact this is not what happens. Experience exists at a more basic level than physical systems, and experiences are associated with physical systems because of principles that have to do with the underlying nature of reality, at a level deeper than the level at which the physical/non-physical distinction exists. What I have proposed here is a concrete and formal theory of this “deeper level,” and an explanation of the association between awareness and physical processes in terms of this theory.
Empirical exploration of the theory presented may be carried out by observing the extent to which intensity of perceived qualia corresponds with intensity of patterns in brains measured relative to brains. Of course, this general concept may be pinned down in many different ways, but that doesn’t make it unscientific – it just makes it more of a “research programme” than a single scientific theory.
In artificial intelligence terms, the present theory suggests that if an AI program is constructed so that its dynamics give rise to a constant stream of patterns that are novel and significant (measured relative to the system itself), then this program will report experiences of awareness and consciousness somewhat similar to those that humans report.
· Barbour, Julian (1999). The End of Time. Weidenfeld & Nicholson
· Chaitin, Gregory (1992). Algorithmic Information Theory. Cambridge University Press.
· Chalmers, David (1995). Facing Up to the Problem of Consciousness. Journal of Consciousness Studies 2(3):200-219.
· Dennett, Daniel (1992). Consciousness Explained. Viking.
· Goertzel, Ben (1997). From Complexity to Creativity. Plenum Press.
· Goertzel, Ben (2000). The Dynamics of Consciousness in the Webmind AI Engine. Dynamical Psychology,
http://goertzel.org/dynapsyc/2000/ConsciousnessInWebmind.htm
· Hutter, Marcus (2003). A Gentle Introduction to the Universal Algorithmic Agent AIXI. In Goertzel and Pennachin (eds.), Artificial General Intelligence, to appear.
· Mandler, G. (1975). Mind and Emotion. Wiley.
· Peirce, Charles S. (1982). Collected Works Volume 5. Indiana University Press.
· Solomonoff, Ray (1964). A Formal Theory of Inductive Inference, Parts I and II. Information and Control 7(1):1-22 and 7(2):224-254.