Structure of Intelligence -- Copyright Springer-Verlag © 1993

Appendix 3

A Quick Review of Boolean Logic

    Leibniz was probably the first person to make a serious attempt to analyze thought mathematically. But Leibniz's pioneering efforts in this direction never inspired much enthusiasm. The project never really got off the ground until George Boole published Laws of Thought. Boole, like Leibniz before him, had the idea of formulating a mathematical theory which does for ideas what ordinary algebra does for numbers. And, unaware that Leibniz had worked on the problem, he began by duplicating Leibniz's efforts.

    One might think that an algebra of thought would have to be something dramatically new. But in fact what Leibniz and Boole independently conceived was a synthesis of numerical algebra and ordinary language. Anthropologically, one might observe that, having committed themselves to expressing thoughts in terms of sequences of ink marks, they turned immediately to the two best-developed systems of sequences of ink marks. They arrived at the concept of a "proposition."

PROPOSITIONS

    A proposition is simply a statement which is either true or false, and not both. For instance, it seems fairly clear that "George Boole wrote a book called Laws of Thought" and "There is an Arab ruler who can play the bagpipes through his ear" are propositions. The first one is true and, as far as I know, the second is false. On the other hand, it also seems fairly clear that "Ione Skye is ten times prettier than Kim Basinger" and "Why are you reading this claptrap?" and "XXXXPPGttkeykeykey" are not propositions. The first is not really well-defined, and the second and third are not statements at all.

    Strictly speaking, propositions are not sentences. A proposition is an abstraction. A sentence can represent a proposition. But two different sentences can represent the same proposition. For instance, "The U.S. is capitalist and Russia is socialist" represents the same proposition as "Russia is socialist and the U.S. is capitalist." The two sentences are not word-for-word identical, but they are equivalent in the sense that if either one is true, then the other is true; and if either one is false, then the other is false.

    One way to deal with this situation is to introduce the strange notion of possible worlds. That is: consider the set of all universes that might possibly exist. Then a proposition may be considered as a sort of cosmic labeling device which assigns to each possible universe either the label "true," the label "false," or the label "meaningless" -- depending on whether, given the conditions of that universe, the proposition is true, false or meaningless. And it is clear that two statements which assign the same label to every universe correspond to the same proposition.

    For example, to a universe in which the Bolshevik revolution failed and Russia became a democratic-capitalist constitutional monarchy like England, the proposition "The U.S. is capitalist and Russia is socialist" would assign the label "false." In that universe, the proposition is false. To a universe in which a comet pulverized the earth in 10,000 B.C., the proposition "The U.S. is capitalist and Russia is socialist" would assign the label "meaningless" -- in that universe, there is no such thing as the U.S. To our universe, of course, this proposition would assign the label "true". And it would also assign the label "true" to an infinity of alternate possible universes -- for example, the universe which is exactly the same as ours except that the last word in this sentence is not "pomegranates."
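
    This labeling picture is easy to make concrete. Here is a minimal sketch in Python; the particular worlds, their names, and the way the labels are assigned are of course invented purely for illustration.

    # A toy model of a proposition as a labeling of possible worlds.
    # The worlds and the labeling below are invented purely for illustration.

    WORLDS = ["our_world", "tsarist_russia_world", "comet_world"]

    def us_capitalist_and_russia_socialist(world):
        # Assign "true", "false" or "meaningless" to each possible world.
        if world == "tsarist_russia_world":    # the Bolshevik revolution failed
            return "false"
        if world == "comet_world":             # no such thing as the U.S.
            return "meaningless"
        return "true"                          # our universe, and many like it

    # Two sentences represent the same proposition exactly when they
    # assign the same label to every possible world.
    def same_proposition(p, q, worlds=WORLDS):
        return all(p(w) == q(w) for w in worlds)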

    There are certain propositions, such as "2+2=4," which we would like to say are true in any possible world. Philosophers refer to propositions of this sort as analytic propositions. More poetically, Saint Augustine called them "eternal verities." Those propositions which are true in some possible worlds but false in others are called contingent propositions. This distinction is interesting from a metaphysical point of view, and it is particularly relevant to the theory of mind if one takes a Nietzschean point of view. According to Nietzsche, true propositions are true only in that they are so extremely useful we cannot imagine giving them up. In this interpretation an analytic proposition is one which is useful to an intelligent entity in any environment whatsoever.

    The possible worlds approach obviously rests upon shaky ground: after all, how do we decide which worlds are possible? We only live in one world; all the rest are just conjecture.

    However, it is indisputable that each of us makes his way through the world by continually speculating as to how the future might turn out, and also how the past might have been. In each of our minds, there is an array of possible worlds. It turns out that the notion of possible worlds also arises in the analysis of consciousness.

BOOLEAN ALGEBRA

    The fundamental operations of numerical algebra are addition and multiplication. Correspondingly, Boole suggested, the fundamental operations of mental algebra must be "or" and "and" (disjunction and conjunction). In order to write these mental operations, Boole borrowed the notation of numerical algebra. For instance, if we set

        X = "George Boole wrote a book called Laws of Thought"

                and

        Y = "There is an Arab nation whose ruler can play the bagpipes through his ear,"

                then

        X+Y = "Either George Boole wrote a book called Laws of Thought, or there is an Arab nation whose ruler can play the bagpipes through his ear."

                and

        XY = "George Boole wrote a book called Laws of Thought, and there is an Arab nation whose ruler can play the bagpipes through his ear."

That is, in Boole's notation, "X and Y" is written XY, and "X or Y" is written X+Y.

    Now, addition and multiplication are only two of the four operations of numerical algebra. What about subtraction and division? It turns out that there is no real counterpart to division in the realm of propositions. But subtraction is a different matter. Subtraction, Boole observed, is analogous to negation. In his scheme, "minus" in the realm of numbers corresponded to "not" in the realm of thought. So, for example, where X is as defined above,

        -X = "George Boole did not write a book called Laws of Thought"

                and

        -Y = "There is not an Arab nation whose ruler can play the bagpipes through his ear."
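
    To make the notation concrete, here is a minimal sketch in Python. Treating each proposition simply as a truth value is an assumption made for illustration; Boole's X and Y stand for whole statements, but their algebra is the algebra of their truth values.

    # A minimal model of Boole's operations, treating each proposition
    # as a truth value: X+Y is disjunction, XY is conjunction, -X is negation.

    def disjunction(x, y):   # Boole's X + Y, "either X or Y"
        return x or y

    def conjunction(x, y):   # Boole's XY, "both X and Y"
        return x and y

    def negation(x):         # Boole's -X, "not X"
        return not x

    X = True    # "George Boole wrote a book called Laws of Thought"
    Y = False   # "There is an Arab nation whose ruler can play the bagpipes through his ear"

    print(disjunction(X, Y))   # True:  X+Y holds because X holds
    print(conjunction(X, Y))   # False: XY fails because Y fails
    print(negation(Y))         # True:  -Y holds because Y fails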

    In numerical algebra, we have the rule X+0=X, for any X whatsoever. In the Boolean scheme, this rule can be translated into the algebra of thought by using the symbol "0" to denote the "empty proposition," the proposition which says nothing whatsoever. In symbols: 0=" ". The statement X+0 then means "either X, or nothing whatsoever." And since "either X, or nothing whatsoever" is true whenever X is true, and false whenever X is false, it is equivalent to X. By similar reasoning, it follows that 0X=0.

    Boolean algebra has its own set of rules, similar but not equivalent to the rules of numerical algebra. For instance, in Boolean algebra -X + -Y = -(XY). This is easy to see -- all it means is that "either X is false or Y is false" is the same as "not both X and Y are true." But the same rule obviously does not hold for numerical algebra. The complete list is as follows, where 0 is the "zero element" (the proposition which says nothing) and 1 is the "unit element" (the proposition which contains all other propositions):

Commutative Laws:

    X+Y=Y+X, XY=YX

Distributive Laws:

    X+(YZ)=(X+Y)(X+Z), X(Y+Z)=XY+XZ

Identity Laws:

    X+0=X, 1X=X

Complement Laws:

    X+(-X)=1, X(-X)=0
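
    These laws are easy to check mechanically. The following sketch, which again models propositions as plain truth values (with 0 as False and 1 as True), runs through every possible assignment and verifies each of the laws above, together with the rule -X + -Y = -(XY) mentioned a moment ago; every line prints True.

    # Brute-force check of the Boolean laws over all truth assignments,
    # modeling 0 (the zero element) as False and 1 (the unit element) as True.

    from itertools import product

    def holds(law):
        return all(law(x, y, z) for x, y, z in product([False, True], repeat=3))

    print(holds(lambda x, y, z: (x or y) == (y or x)))                          # X+Y = Y+X
    print(holds(lambda x, y, z: (x and y) == (y and x)))                        # XY = YX
    print(holds(lambda x, y, z: (x or (y and z)) == ((x or y) and (x or z))))   # X+(YZ) = (X+Y)(X+Z)
    print(holds(lambda x, y, z: (x and (y or z)) == ((x and y) or (x and z))))  # X(Y+Z) = XY+XZ
    print(holds(lambda x, y, z: (x or False) == x))                             # X+0 = X
    print(holds(lambda x, y, z: (True and x) == x))                             # 1X = X
    print(holds(lambda x, y, z: (x or (not x)) == True))                        # X+(-X) = 1
    print(holds(lambda x, y, z: (x and (not x)) == False))                      # X(-X) = 0
    print(holds(lambda x, y, z: ((not x) or (not y)) == (not (x and y))))       # -X + -Y = -(XY)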

    These rules form an interesting and important mathematical structure. However, it seems rather unlikely that they are the "laws of thought", for three reasons.

    For one thing, they govern only deduction, whereas (as will be emphasized in the pages to follow) induction and analogy are equally important to mental process. I think it is a mistake to assume, as is often done among computer scientists (Indurkhya, 1988), that induction and analogy may be understood as special cases of deduction.

    Also, it is not at all clear that the mind actually deduces according to Boolean algebra. As pointed out in Chapter 8, there are serious contradictions between Boolean algebra and common sense logic. Contemporary logicians are seeking to remedy these with new axiom systems called "paraconsistent" and "relevance" logics, but the case is far from closed.

    And, finally, it is not obvious that the brain gets along well with Boolean algebra. For instance, the distributive rule X(Y+Z) = XY + XZ seems very straightforward: all it says is that "X and either Y or Z" is the same as "either X and Y or X and Z". However, quantum theory indicates that when X, Y and Z represent propositions about physical situations, this is not always true. This simple observation, first made by Birkhoff and von Neumann (1936), has given rise to an entire new field: quantum logic (Mittelstaedt, 1978).

LOGICAL PARADOXES

    The first recorded logical paradox is that of Epimenides the Cretan, who proclaimed that "All Cretans are liars." If we believe him, then we should disbelieve him. But if we disbelieve him, then that is evidence that we should believe him.

    Of course, this is a very rough paradox. After all, it could be that only Epimenides is a liar. What is generally referred to as Epimenides' Paradox is the refined version: "This sentence is false." Here there is no escape. If it is false, it is true; if it is true, it is false.

    Berry's paradox is equally intriguing. What is the smallest number that cannot be described in English in one hundred characters or fewer? Whatever it is, is it not described by the phrase "the smallest number that cannot be described in English in one hundred characters or fewer"? This phrase is in English, and it uses fewer than one hundred characters. Therefore there is no number that cannot be described in English in one hundred characters or fewer. And yet this is not possible: there is an infinity of numbers, but only a finite number of English phrases of at most one hundred characters.
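
    The counting argument at the end can be made explicit. Assuming, purely for illustration, an alphabet of twenty-seven characters (twenty-six letters plus the space), the number of phrases of at most one hundred characters is enormous but finite, whereas there are infinitely many natural numbers:

    # Number of strings of length at most 100 over a 27-character alphabet
    # (26 letters plus the space -- an assumption made purely for illustration).

    ALPHABET_SIZE = 27
    phrase_count = sum(ALPHABET_SIZE ** k for k in range(101))   # lengths 0 through 100
    print(phrase_count)   # a 144-digit number, roughly 1.4 * 10**143 -- huge, but finite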

    And consider Russell's paradox. The barber in a certain town shaves all the residents except those who shave themselves. Who shaves the barber? In mathematical language: let R denote the set of all sets which do not contain themselves. Does R contain itself?

    For centuries such conundrums were considered childish irrelevancies. But suddenly, around the turn of the century, they came to assume a tremendous importance for philosophers and mathematicians.

    Mathematical logicians had succeeded in unifying all of mathematics into a small set of rules for manipulating a small set of symbols -- into a simple "formal system". Actually there was not just one formal system; there were many varieties. None of them was quite as simple as Boolean algebra, but most of them were similar to it in many respects. Through the efforts of Peano, Frege, Bertrand Russell, Alfred North Whitehead and others, everything from geometry to calculus to number theory was expressed in terms of these elementary ideas. It appeared that mathematics had finally been made completely exact.

    It is important to understand the full implications of this (apparent) achievement. First of all, it implied that every single mathematical statement could be expressed in this one formal language -- every statement about triangles, quadratic equations, calculus, nine-dimensional spheres, abstract topologies, whatever. And secondly, it indicated that every single true mathematical statement -- but no false mathematical statements -- could be arrived at by an application of the rules of this system. Therefore, doing mathematics was reduced to dealing with the symbols and the rules of this system. Or so it was thought at the time.

    But before long this tremendous leap, the result of decades of ingenious research, was plagued with two serious difficulties -- both of which centered around simple conundrums like the Paradox of Epimenides. The second, and most damaging, of these difficulties was Gödel's Theorem. The first was the Theory of Types.

    The formalizations of mathematics that they had arrived at permitted certain forms of these paradoxes (Russell's paradox in particular) as valid mathematical statements. The problem was the following elementary fact: if a system of mathematical rules permits one of these paradoxes as valid, then for every X which is valid in it, -X is also valid.

    To see this, assume for instance that Epimenides' Paradox is an admissible statement in a formal system S which incorporates Boolean algebra. Let G = "This sentence is false". Then G implies -G, and -G implies G. So if G is true in S, then so is -G. But given this fact, one can prove that any statement whatsoever is true. For take an arbitrary statement B. Then, since G is true, "G + B" is true. And since -G is also true, "-G(G+B)" is true. But "-G(G+B)" implies B.
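
    The final step -- that "-G(G+B)" implies B -- is itself just a fact of Boolean algebra, and can be checked by brute force in the same style as before, with propositions modeled as truth values:

    # Check that -G(G+B) implies B: the conditional
    # "if ((not G) and (G or B)) then B" holds under every assignment.

    from itertools import product

    def implies(p, q):
        return (not p) or q

    print(all(implies((not g) and (g or b), b)
              for g, b in product([False, True], repeat=2)))   # True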

    What good is a system of mathematics which cannot prove any statement true without also proving it false? Not much good, everyone realized -- you have to get rid of the paradoxes. Russell and Whitehead came up with one way of doing so: the theory of types. This is not essentially a mathematical idea: it has more to do with the general theory of reference. The theory of types can be applied to any language, formal or informal. It is nothing more or less than a way of organizing a set of statements.

THE THEORY OF LOGICAL TYPES

    In order to organize a set of statements according to the theory of types, one must first distinguish a set of "basic elements." These basic elements are supposed to be what the statements are fundamentally about. For instance, they might be numbers, or physical objects. These basic elements are assigned logical type 0.

    Next, one must isolate a set of statements which are statements about the basic elements. These statements are assigned logical type 1. For instance, if the basic elements are physical objects, then statements like "the cat is on the mat" and "the log is in the bog" are of logical type 1.

    Next, one isolates a set of statements which are statements about either basic elements or type 1 statements, or both. These are assigned logical type 2. For instance, "You were lying when you said the cat was on the mat" is of logical type 2 -- it is a statement about a statement about physical objects.

    Similarly, a statement of logical type 3 is a statement about entities of logical type 0, 1 or 2. A statement of logical type 4 is a statement about entities of logical type 0, 1, 2 or 3. And so on.

    What's the point of all this? Well, where does a statement like "This statement is false" fit in? It is a statement about itself. There is no way to reduce it to a statement about basic elements, or a statement about statements about basic elements, or a statement about statements about statements about basic elements.... The point is that a statement of logical type n cannot refer to another statement of logical type n. For instance, it cannot refer to itself.
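
    The stratification is easy to picture in code. In the following sketch (the class and its typing rule are invented here purely for illustration), every statement carries a logical type one greater than anything it refers to -- and a statement that refers to itself simply cannot be constructed:

    # A toy stratification in the spirit of the theory of types.
    # (The class and its typing rule are invented for illustration.)

    class Statement:
        def __init__(self, text, refers_to=()):
            # A statement's logical type is one more than the highest logical
            # type of the statements it refers to.  A statement that refers to
            # no other statement is about basic elements (type 0), so it gets
            # logical type 1.
            self.text = text
            self.refers_to = tuple(refers_to)
            self.logical_type = 1 + max(
                (s.logical_type for s in self.refers_to), default=0)

    cat = Statement("The cat is on the mat")                    # logical type 1
    lie = Statement("You were lying about the cat", (cat,))     # logical type 2
    print(cat.logical_type, lie.logical_type)                   # prints: 1 2

    # "This sentence is false" cannot even be written down in this scheme:
    # a Statement would have to be handed to its own constructor before the
    # constructor had finished building it.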

    If one requires that all mathematical statements have a logical type, then the paradoxes disappear. They are no longer well-formed mathematical statements. There is a problem with this, however: it also rules out innocuous self-referential statements like "This sentence is true." It is not paradoxical to say "this sentence is true", and there is no real reason to forbid such utterances.

    It may seem that there is no great harm in getting rid of statements like "this sentence is true." After all, what good did saying "this sentence is true" ever do anyone? But there is no need to pursue this point, because the recent work of Paul Aczel has rendered it moot. He has given a formalization of mathematics which permits nonparadoxical self-reference without permitting paradoxical self-reference.

    In fact, it turns out that not only is there no need to rule out "this sentence is true" and other innocuous self-references -- but there are some circumstances where "vicious" self-references like "this sentence is false" may come in handy. Certainly, we should forbid them to enter our mathematical deductions. But the work of G. Spencer-Brown (1970), Gregory Bateson (1980) and others has shown that, treated carefully, they can be useful.

    The theory of types is indirectly relevant to the perceptual/motor hierarchy as described in Chapter 9, in which higher and higher levels may be understood to correspond to higher and higher logical types.