Intelligence, Mind

and Self-Modification:

Defining the Core Concepts of AI

 

Ben Goertzel

March 2002

 

 

 

1.  Introduction

 

This document presents some rough informal definitions of several critical concepts in AI and psychology: intelligence, mind, and self-modification. 

 

These definitions can also be formalized mathematically, but this formalization involves a number of arbitrary decisions, and I am not sure how much it adds, since at present no calculations or proofs have been done using the formal definitions.

 

 

2.  Pattern and Emergence

 

We begin with the notion of pattern.  A pattern in an entity is defined as a representation of that entity as something simpler. 

 

That is, a process P is a pattern in an entity X if P (approximately at least) produces X, and P is simpler than X. 

 

Note that this definition of pattern relies on two other concepts:

 

·       Similarity (how approximate is “approximate”?)

·       Simplicity

 

Mathematically, the notion of pattern has a lot to do with a branch of math known as “algorithmic information theory,” pioneered by Gregory Chaitin.

 

Conceptually, the idea is: to produce X but be simpler than X, P must capture some kind of regularity in the makeup of X.

 

Since similarity and simplicity both come in degrees, pattern does too.  P can be a pattern in X to a greater or lesser degree; one can consider this degree to lie between 0 and 1.
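For concreteness, here is one way the degree of pattern might be quantified (an illustrative formalization only, not necessarily the formal definition alluded to in the Introduction; the symbols sim and c are my own notation):

\[
\deg(P \mid X) \;=\; \mathrm{sim}\big(P^{*},\, X\big)\cdot \max\!\left(0,\ \frac{c(X) - c(P)}{c(X)}\right)
\]

Here P* is the entity that P produces, sim(·,·) is a similarity measure taking values in [0,1], and c(·) is a complexity measure (the inverse of simplicity).  The first factor rewards approximate reproduction of X; the second rewards being simpler than X.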

 

If you have an entity X – say a picture, or a person, or a document, or an idea – you can theoretically construct the set of all patterns in X.   This assumes that you have settled on a way of measuring similarity and simplicity.  The set of all patterns in X is what I call the structure of X.

 

If X changes over time, there are a couple of ways of thinking about the set of patterns in X.  You can look at the set of patterns in X at a given time.  Or you can look at the set of patterns in X as it changes over time.  Or you can consider both sets merged together.

 

Having defined pattern, we can then define emergence.  If we have two things, X and Y, then the emergence generated by X and Y is defined as the amount of pattern that’s there when you put X and Y together, in addition to the pattern that’s there in X and Y considered individually.
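Writing St(Z) for the structure of Z (the set of patterns in Z, as defined above) and |·| for total pattern intensity rather than mere set size, this can be sketched as:

\[
\mathrm{Em}(X, Y) \;=\; \big|\mathrm{St}(X \cup Y)\big| \;-\; \big|\mathrm{St}(X)\big| \;-\; \big|\mathrm{St}(Y)\big|
\]

where X ∪ Y denotes the combined entity formed by putting X and Y together.  This is only a schematic rendering; the choice of intensity measure is one of the arbitrary decisions mentioned in the Introduction.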

 

Of course, this doesn’t capture all shades of meaning of the natural language concept of emergence.  No reasonably simple mathematical definition could.  What my definition of emergence captures is the degree to which, when you put X and Y together, “the structure of the whole is greater than the structure of the parts.”

 

 

3.  Intelligence

 

Let us begin with the pragmatic conception of intelligence given in (Goertzel, 1993), which says that

 

Intelligence is the ability to achieve complex goals in a complex environment

 

What this says is:

 

·       All else equal, the greater the total complexity of the set of goals that the system can achieve, the greater the system’s intelligence

·       All else equal, the greater the complexity of the set of environments in which these goals are achieved, the greater the system’s intelligence

 

There are various mathematical equations that can be formulated to embody this definition.  These equations define “degree of intelligence” as a function of quantities such as the following, for the various possible goals G and environments E:

·       the complexity of the goal G

·       the complexity of the environment E

·       the degree to which the system actually achieves G in E
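One of many possible such equations (an illustrative weighting only) is:

\[
I(S) \;=\; \sum_{E}\sum_{G}\; c(E)\, c(G)\; p\big(S \text{ achieves } G \text{ in } E\big)
\]

where c(·) is a complexity measure and p(·) is the degree (or probability) of goal-achievement.  Intelligence is then high exactly when complex goals are achieved in complex environments.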

4.  Mind

Mind is an even more controversial and ambiguous notion than intelligence, so that attempting to give it a formal definition may seem an unforgivable act of hubris.  But again, we must caution that we do not aim to fully capture the natural language notion of “mind.”  Rather, our goal is to arrive at a pragmatic working definition.

Philosophically speaking, we believe that mind may be viewed on a number of different levels. 

There is the experiential view, the view of the stream of raw consciousness – from this point of view mind is unanalyzable, simply present. 

There is the physical, mechanistic view, which is how neurophysiologists typically see the human mind, and how Webmind programmers think of each individual part of Webmind as they construct and test its details. 

There is the point of view of pattern and relationship.  From this perspective, mind is a web of relationships, of patterns.  Many philosophers have taken this perspective, using different languages to describe roughly similar ideas.  Charles S. Peirce viewed mind as a network of habits, each one extending itself over the other habits that it related to.  Buddhist psychology views mind as karma, as a collection of accumulated habit-patterns, all building on each other to construct a unified inner world.  Nietzsche viewed mind as a field of dynamic quanta, each one extending itself over other quanta to which it is related.  Goertzel (1994), in a similar spirit, portrays mind as a web of patterns – a dynamic web, continually rebuilding itself by a dynamic in which each component, each pattern, continually modifies the other patterns that it’s related to. 

There is also the point of view of synergy and emergence – in which mind is viewed as a system of relations which has its own wholeness, not predictable from any of the relations in particular.  Synergy in Novamente occurs on three different levels: within an individual module devoted to a specialized task like language processing or reasoning; among the different modules, in the form of intelligence-boosting inter-module interactions; and in the form of emergent patterns between Novamente and its environment, including other intelligent systems.

How does one boil these multiple levels down into a formal definition?   Suppose we have an intelligent system S.   Then, as a first stab, we may define

The internal aspect of S’s mind is the set of patterns in S

The external aspect of S’s mind is the set of patterns emergent between S and its environment

These definitions focus on the relational and emergent views of mind.  They do not touch the experiential level.  And the physical level is only implicit, because it is assumed there is some physical system S that is operating and hence leading to these patterns that are part of S’s mind. 

The only thing missing from these definitions is an appropriate way of quantifying fuzzy membership in the mind.  Not all patterns in an intelligent system are equally “mind-ish”, even if they all do contribute to the system’s intelligence to some extent.   For each pattern P considered in S’s mind’s internal or external aspects, one needs to quantify how much P is actually correlated with S’s intelligent behavior. 
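One illustrative way to quantify this fuzzy membership (a sketch only, not a settled definition) is:

\[
\mu_{\mathrm{mind}(S)}(P) \;=\; \deg(P)\cdot \mathrm{corr}\big(\,\text{presence of } P \text{ at time } t,\ \ \text{S’s goal-achievement at time } t\,\big)
\]

where deg(P) is P’s intensity as a pattern in the relevant aspect (internal or external), and corr is an ordinary correlation taken over the system’s history.  A pattern then belongs strongly to S’s mind only if it is both a significant pattern in S (or between S and its environment) and correlated with S’s intelligent behavior.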

 

5.  A Simple But Prohibitively Inefficient AI System

If one lifts the requirement of computational efficiency, then the task of artificial intelligence design becomes remarkably easy.  Whether this observation is of any pragmatic interest for AI design is a matter of opinion.  There are those who believe that the best approach to AI is to begin with a simple approach that works under the assumption of near-infinite computational resources, and incrementally make it more efficient.  There are others who believe that the assumption of near-infinite computational resources brings one into a fantasyland of no relevance to practical system design. 

 

The main problem, of course, is that these mathematical observations do not come close to demonstrating that AI is physically possible.  We have the human brain as proof that a certain level of intelligence is physically possible, but we do not know for sure that the human brain is a computational entity – there is still the off chance that it uses some kind of mysterious quantum gravity force to do its cognitive magic.

 

The essence of “near-infinite resources AI” was worked out by Ray Solomonoff in the 1960s, and goes by the name of Solomonoff induction.  The mathematical theory of Solomonoff induction is complex (Solomonoff, 1978; Legg, 1997), but the basic idea can be expressed simply and informally.  What I’ll describe here is just one of many ways of concretizing the Solomonoff induction idea in an AI design.  The details given here are not exactly those of Solomonoff induction, but the philosophical notion is the same.

 

Suppose you have a system with: some goals, receptors and actuators allowing it to perceive and act in an environment, a large amount of memory and processing power, and flexibility to rewrite most of its memory however it likes.  The goals and the environment can be made as complex as one likes.  Then the system can achieve arbitrarily great intelligence, defined in terms of satisfying its goals in its environment, as follows.

 

Suppose that the system’s memory is divided into two parts: the metaprogram, and the current program.   Since we are positing extremely ample computational resources, there is no problem allowing the current program and the metaprogram to operate simultaneously at all times. 

 

The metaprogram works as follows.  It creates and maintains a huge table which records, for each time t in the system’s history:

·       the perceptions the system received at time t

·       the actions the system took at time t

·       the degree to which the system’s goals were achieved at time t

At each point in time, it searches the space of all programs of size less than N (where N is very, very large), and for each program it assesses two qualities:

·       the program’s simplicity (essentially, its smallness)

·       the program’s historical utility: the degree to which, had the system obeyed that program in the past (given the recorded perceptions), its goals would have been achieved

The metaprogram then chooses a program that maximizes an appropriate combination of simplicity and historical utility.

 

In other words, it asks itself: “What is a simple program P such that, if I had obeyed P in the past, I would have had a high probability of achieving my goals?” 

 

The most desirable program P is then selected as the current program, and allowed to govern the system for the immediate future – until the metaprogram finds something better.
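To make the loop concrete, here is a minimal sketch in Python (my own illustration under strong simplifying assumptions; the names historical_utility and metaprogram_step and the program representation are hypothetical, not part of any actual Solomonoff-style system):

# A purely illustrative sketch of the metaprogram loop described above.
# Assumptions (mine, not the text's): candidate programs are Python callables
# mapping a perception to an action, and they arrive already paired with a size;
# the exhaustive enumeration of all programs of size < N is left abstract,
# since that is precisely the intractable part.

from typing import Callable, Iterable, List, Tuple

Perception = str
Action = str
HistoryEntry = Tuple[Perception, Action, float]  # (perception, action, goal-achievement)

def historical_utility(program: Callable[[Perception], Action],
                       history: List[HistoryEntry]) -> float:
    """Crude proxy for 'how well would my goals have been met had I obeyed program':
    average goal-achievement over the moments where the program would have chosen
    the same action the system actually took."""
    scores = [g for (p, a, g) in history if program(p) == a]
    return sum(scores) / len(scores) if scores else 0.0

def metaprogram_step(history: List[HistoryEntry],
                     candidates: Iterable[Tuple[Callable[[Perception], Action], int]],
                     simplicity_weight: float = 0.01) -> Callable[[Perception], Action]:
    """Pick the next 'current program' by maximizing utility plus a simplicity bonus."""
    def score(item: Tuple[Callable[[Perception], Action], int]) -> float:
        program, size = item
        return historical_utility(program, history) + simplicity_weight / (1 + size)
    best_program, _ = max(candidates, key=score)
    return best_program

In practice, of course, supplying candidates that range over every program of size less than N, and scoring each one against the entire history, is exactly the absurd resource requirement discussed below.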

 

Obviously, the metaprogram is acting here as a general pattern recognition engine.  In finding the optimal program P, it is searching for patterns in the past history of the system, patterns of historical perceptions and current actions that have led to goal-achievement.

 

Now, the problem with this approach to AI is fairly obvious: the memory and processing requirements are absurdly severe!  However, Solomonoff induction is useful as a proof of concept.  What it shows is that achieving real AI is basically a matter of memory and processing efficiency.  If these weren’t an issue, AI would be easy.  It would require just a few hundred lines of code in a high-level programming language.   The only assumption this proof of concept makes is the Church-Turing thesis, or, more accurately, the variant of the Church-Turing thesis which states that the mind/brain’s activity can be described computationally.

 

One approach to making practical AI would be to start with Solomonoff induction and try to reduce the memory and processing requirements.  Perhaps the most straightforward way to do this is to use genetic programming. 

 

In genetic programming, one begins with a goal, and searches to find a program that fulfills this goal.  The search algorithm is not a brute-force search, but a cleverly self-focusing search that relies on successive refinement of a population of potential solutions using evaluation, mutation and crossover operations.  The details of GP are well described in the literature (Koza, 1992), and are also described in more detail in a later chapter, so I won’t go into them here.
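As a rough illustration of the kind of loop involved (a generic evolutionary sketch, not Koza’s actual GP machinery; program representation, crossover and mutation are passed in as black boxes):

# A bare-bones evolutionary loop, illustrative only.  The interesting parts of GP
# (tree-structured program representation, subtree crossover, typed mutation) are
# abstracted into the functions passed in; this assumes a population of at least
# two candidates.

import random
from typing import Callable, List, TypeVar

Program = TypeVar("Program")

def evolve(population: List[Program],
           fitness: Callable[[Program], float],
           crossover: Callable[[Program, Program], Program],
           mutate: Callable[[Program], Program],
           generations: int = 100,
           mutation_rate: float = 0.1) -> Program:
    for _ in range(generations):
        # keep the fitter half of the population as parents (truncation selection)
        parents = sorted(population, key=fitness, reverse=True)[: max(2, len(population) // 2)]
        children: List[Program] = []
        while len(children) < len(population):
            child = crossover(*random.sample(parents, 2))
            if random.random() < mutation_rate:
                child = mutate(child)
            children.append(child)
        population = children
    return max(population, key=fitness)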

 

Unlike simple Solomonoff induction, GP is a tractable approach to AI in some cases.  When the goal is simple and the environment is not too complex, it works fine.  But it doesn’t scale well enough to be used as the basis for a large-scale, real-world AI system.  GP evolution times, on modern hardware, are days for moderately complex problems; for the problem of driving a real system toward complex goals, millennia might be an optimistic estimate.

 

Of course, GP is not the only approach one could take to juicing up Solomonoff induction.  One could also, for instance, use recurrent backpropagation neural networks,  given an appropriate mapping into the input and output neurons of the nets.  However, this would be even more ridiculously inefficient than GP. 

 

Is the Solomonoff induction approach, then, entirely worthless from a practical perspective?  The answer, we believe, is: Almost, but not quite.  Solomonoff induction does embody a valid insight into pragmatic AI engineering, but an insight that is only practically useful after a huge amount of preliminary design and engineering work has been done, along quite different lines.

 

The insight is that a general self-modifying AI system may be considered as having basically the same structure as the Solomonoff AI described above.  There is a “current program” controlling the system, and a “metaprogram” that changes the current program based on meta-level learning.  We believe this design is in fact critical for achieving maximally intelligent AI – AI more intelligent than its creators.  But, in order for this to work in practice, the metaprogram needs to be doing something better than exhaustive search of all programs, and the current program needs to be something other than a randomly selected program, or else it will not generate even remotely interesting data for the metaprogram to look at. 

 

 

 

6.  Varieties of Self-Modification

 

 

Finally, it is perhaps worth exploring this concept of self-modifying intelligence in a little more detail. 

 

Suppose, as above, we have a system, which is attempting to achieve some goals relative to some environment.

 

There are at least two different senses in which we can say that this system is self-modifying.  These may be quantified as two corresponding quantities associated with the system at a given time:

·       the degree of intelligent noetic self-modification

·       the degree of intelligent dynamical self-modification

The notion of cause is itself a complex one, and an adequate full definition is elusive.  In short, my approach to causation is to define it as a combination of two factors.  A causes B if:

·       the occurrence of A increases the probability that B subsequently occurs, and

·       there is some reasonably simple mechanism by which A could produce B

Note the occurrence of “simplicity” here.  Essentially this simple mechanism is a particular type of pattern.

 

This is not a complete rendition of the human common-language notion of causation, but after careful study I have come to believe it comes sufficiently close for most practical purposes.
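If a single number is wanted here, one rough quantification along these lines (my own illustrative rendering of the two factors above) is:

\[
\mathrm{caus}(A, B) \;=\; \big(p(B \mid A) - p(B)\big)\;\cdot\;\deg\big(M_{A \rightarrow B}\big)
\]

where the first factor measures how much A’s occurrence raises the probability of B, and the second is the degree to which some mechanism M leading from A to B constitutes a simple pattern.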

 

Given the notion of causality, intelligent noetic self-modification is very simple to understand.  The system’s state is changing all the time.  If its state changes seem to cause intelligence increases, then we can say it’s changing itself in a way that makes it smarter.

 

Intelligent dynamical self-modification is a little trickier to quantify.  We wish to define the degree to which changes in the “cognitive dynamics” by which a system operates correlate with increases in the system’s intelligence.  

 

But the problem is that the cognitive dynamics of the system cannot be assumed to be explicitly known.  They may be implicit in the behavior of the system.  We may know the very-low-level mechanical dynamics of the system, but changes in these are not necessary in order for the system to be considered as changing its cognitive dynamics over time. 

 

We need to think about the implicit dynamics of a system.  An implicit dynamic is basically a pattern in the system’s change over time.  If a process is a good model of the observed changes in some of the system’s components, then we can call it an implicit dynamic in the system.

 

Intelligent dynamical self-modification can then be defined as the degree to which changes in the system’s implicit dynamics cause increases in the system’s intelligence.
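In symbols, the two quantities may be sketched (illustratively, reusing the causation measure above) as:

\[
\mathrm{SM}_{\mathrm{noetic}}(S) \;=\; \mathrm{caus}\big(\Delta\,\mathrm{state}(S),\ \Delta I(S)\big),
\qquad
\mathrm{SM}_{\mathrm{dyn}}(S) \;=\; \mathrm{caus}\big(\Delta\,\mathrm{Dyn}(S),\ \Delta I(S)\big)
\]

where Dyn(S) is the set of S’s implicit dynamics, I(S) is S’s intelligence, and Δ denotes change over the time interval being considered.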

 

In most sophisticated AI systems, both kinds of self-modification will be going on all the time.  However, my conjecture is that

 

 

It is possible that this conjecture could be turned into a theorem with a sufficient amount of work and the addition of a sufficient number of qualifying conditions.  I have made no serious effort to do so, however.

 

Finally, how do these notions relate to the Novamente AI system? 

 

All of Novamente’s cognitive dynamics are specifically oriented toward intelligent noetic self-modification: changing the state of the system so that it achieves its goals more effectively.

 

Intelligent dynamical self-modification is to some extent implicit in intelligent noetic self-modification, but, in Novamente, it becomes much more significant when schema learning becomes a major part of the system’s dynamics.  Schema learning is the mechanism by which Novamente explicitly modifies the procedures by which it does things.  When the system begins modifying its own basic cognitive dynamics, then intelligent dynamical self-modification becomes truly prevalent.  And it becomes truly overpowering when the system becomes able to rewrite its own source code and transform itself into something entirely different from what it once was.

 

6.1  Substructural Self-Modification

 

While developing the concepts of noetic and dynamical self-modification, an attempt was made to define a related concept of substructural self-modification.  This was intended to measure the degree to which the system modifies its own “basic components.”  

 

The intuitive concept is relatively simple.  For instance, if a human learns new thought processes, this modifies his brain’s dynamics to a certain extent.  On the other hand, if a human augments his brain with a neurochip that enables him to do arithmetic and algebra at high speeds, this modifies his knowledge base and dynamics to a very large extent. 

 

One would like to say that the neurochip expands the dynamical repertoire of the system in some fundamental sense.  Once the chip is there, the system can do a lot of things it could not have done otherwise: for instance, compute 1757775555*466464444 in a tenth of a second. 

 

On the other hand, it is not clear to us at this point how this is really qualitatively different from dynamical self-modification.  After all, learning a new cognitive skill allows one to carry out activities that could not have been carried out otherwise.

 

The difference seems to be mainly one of extent.  A neurochip suddenly allows a huge amount of new “implicit dynamics.”   It thus allows the formation of all sorts of system states that would have been impossible otherwise. 

 

In the physical world, without modifying its basic components, there is going to be a ceiling to the intelligence that any finite system can assume.  “Substructural self-modification” in the intuitive sense is thus necessary for dynamical and noetic self-modification to continue beyond a certain point.

 

To formalize the notion of substructural self-modification, it seems to be necessary to take a slightly different point of view.   For instance, one can consider a physical system in terms of a hierarchy of abstract state machines M1, M2, …, Mk, where each state of Mi is defined as a set of states of Mi-1, and the transitions between states of Mi are consistent with the transitions permitted between states of Mi-1.   In the case of a computer program we may have, roughly speaking:

 

M1 = computer hardware

M2 = operating system

M3 = programming language

M4 = AI program

M5 = easily mutable parts of the AI program
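To make the hierarchy idea concrete, here is a small sketch (my own illustration; the names StateMachine and lift are hypothetical) in which a higher-level machine’s states are groups of lower-level states, and a higher-level transition is permitted only if some lower-level transition is consistent with it:

# A minimal illustration (not from the original) of the "hierarchy of abstract
# state machines" idea: each higher-level state is a set of lower-level states,
# and a higher-level transition is only allowed if some permitted lower-level
# transition leads from the first group of states into the second.

from typing import Dict, FrozenSet, Set, Tuple

class StateMachine:
    def __init__(self, states: Set[str], transitions: Set[Tuple[str, str]]):
        self.states = states
        self.transitions = transitions  # permitted (from_state, to_state) pairs

def lift(lower: StateMachine, grouping: Dict[str, FrozenSet[str]]) -> StateMachine:
    """Build the machine one level up: its states are named groups of lower states,
    and a transition between two groups is permitted iff some lower-level
    transition leads from the first group into the second."""
    transitions = {
        (a, b)
        for a, sa in grouping.items()
        for b, sb in grouping.items()
        if a != b and any((x, y) in lower.transitions for x in sa for y in sb)
    }
    return StateMachine(set(grouping), transitions)

# Toy usage: "hardware"-level states grouped into "operating system"-level states.
hardware = StateMachine({"h1", "h2", "h3", "h4"},
                        {("h1", "h2"), ("h2", "h3"), ("h3", "h4")})
os_level = lift(hardware, {"idle": frozenset({"h1", "h2"}),
                           "busy": frozenset({"h3", "h4"})})
print(os_level.transitions)   # {('idle', 'busy')}: consistent with the hardware level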

 

Generally speaking, intelligent modifications to lower levels of the hierarchy will tend to cause higher degrees of noetic and dynamical self-modification.  “Substructural” self-modification as in the neurochip example occurs at a lower level than modification of learning algorithms via experiential adaptation.  In Novamente, modifications to the underlying source code occur at a lower level than modifications to schema operating within a fixed-source-code system.

 

 

 

References

 

Chaitin, Gregory (1987).  Algorithmic Information Theory.  Cambridge University Press, Cambridge

 

Goertzel, Ben (1993).  The Structure of Intelligence.  Springer-Verlag, New York

 

Goertzel, Ben (1994).  Chaotic Logic.  Plenum, New York

 

Koza, John (1992).  Genetic Programming: On the Programming of Computers by Means of Natural Selection.  MIT Press, Cambridge MA

 

Legg, Shane (1997).  Solomonoff Induction. Online at citeseer.nj.nec.com/legg97solomonoff.html

 

Solomonoff, R.J. (1978).  Complexity-based induction systems: comparisons and convergence theorems.  IEEE Transactions on Information Theory, 24, 422-432