Biological versus computational
approaches to creative problem solving
Robert Arp
Department of Philosophy
Imagine
being a dentist in the early part of the 19th century. Now, imagine going to such a dentist to have a
tooth pulled. In those days, pulling teeth was a painful
experience for the patient, as there were no anesthetics in use at
the time. The kinds of things a dentist
used to help ease the patient’s pain before a tooth extraction might have
included having the patient suck on a medicinal herb that produces a numbing
effect in the mouth, placing ice upon the gums, getting the patient to drink
alcohol before the procedure, or any combination thereof. Such were the methods that Dr. Horace Wells
likely used to solve the problem of pain associated with tooth extractions
while working as a dentist.
One
evening in 1844, Dr. Wells and his friend, Samuel Cooley, attended an amusing public demonstration of the
effects of inhaling a gas called nitrous oxide. Mr. Cooley volunteered to go up
on stage to inhale the gas, and proceeded to do things like sing, laugh, and
fight with the other volunteers who had inhaled the gas. In the scuffle, Cooley received a deep cut in
his leg before sobering up and returning to his seat next to Dr. Wells. Someone noticed a pool of blood under Cooley’s
seat, yet Cooley seemed to be unaffected by,
and unaware of, the wound. Upon
witnessing this event, a light went on in Dr. Wells’ head: What if this
laughing gas could be used during tooth extraction to ease a patient’s
pain? The problem of pain associated
with tooth extraction might finally be solved!
In fact, over the next several years Dr. Wells proceeded to use nitrous
oxide, was successful at painlessly extracting teeth from his patients, and the
seeds of modern anesthesia were sown (Roberts, 1989).
Humans
are resourceful animals. We can imagine
Dr. Wells prescribing the various remedies with which he was familiar – the
medicinal herb, the ice, the alcohol – in the attempt to ease a patient’s pain
during tooth extraction. In such a case,
we would have an instance of what Mayer (1995) has called routine problem
solving, whereby a person recognizes many possible solutions to a problem
given that the problem was solved through one of those solutions in the
past. People constantly perform routine
problem solving activities that are concrete and basic to their survival,
equipping them with a variety of ways to “skin” the proverbial cat, as well as
enabling them to adapt to situations and re-use information in similar
environments.
However,
we also can engage in activities that are more abstract and creative, such as
inventing tools based upon mental blueprints, synthesizing concepts that, at first
glance, seem wholly disparate or unrelated, and devising novel solutions to
problems. When Dr. Wells decided to use
nitrous oxide on his patients, he pursued a wholly new way to solve the
problem of pain. This was an instance of
what Mayer (1995) has called nonroutine
creative problem solving. Nonroutine creative problem solving involves finding a
solution to a problem that has not been solved previously. The introduction of nitrous oxide in order to
extract teeth painlessly would be an example of nonroutine
creative problem solving because Dr. Wells did not possess a way to solve the
problem already, and he had not pursued such a route in the past.
Not
only do people make insightful connections like that of Dr. Wells, they take
advantage of serendipitous opportunities, invent products, manufacture space
shuttles, successfully negotiate environments, hypothesize, thrive, flourish,
and dominate the planet by coming up with wholly novel solutions to
problems. It is possible to speak about
creative problem solving from the biological, evolutionary perspective as well
as from a non-biological, computational perspective. After all, researchers have been able to get
both biological entities and alife creations –
those that are purely simulated, as well as those that are connected to robotic
mechanisms – to engage in fairly complex forms of problem solving. In this paper, after putting forward a view
of problem solving from the biological/evolutionary perspective that is rooted
in Mithen’s (1996, 1999) idea of cognitive fluidity,
I go on to argue that nonroutine creative problem
solving – as opposed to routine problem solving – most likely is only possible
for biologically conscious entities having an epigenetic history.
On
the negative side, I argue that the dry-mind or computational approach
to creative problem solving in biologically conscious minds is at the very
least deficient and, at best, should be combined with wet-mind
biological and evolutionary approaches.
I do this by showing that the neural circuitry necessary for creative
problem solving in humans evolved for the purposes of negotiating specific organic
environments and, in this sense, is fundamentally of a different kind from that
of computational circuitry. I also
present evidence of the failure of computational models to handle even routine
forms of problem solving, let alone nonroutine forms
of creative problem solving.
On the positive side, I acknowledge that computational models of
simulated (or virtual) mental activity, set up to “evolve” in simulated
environments, are useful to our understanding of how real minds function in the
real world. Thinkers doing work in
cognitive science and artificial intelligence have made important contributions
to biology and neuroscience, and there is a mutual benefit to be gained – in
terms of our understanding of the mind, its activities, and its evolution – by
these dry-mind and wet-mind groups working together.
Bissociation
and Mithen’s idea of cognitive fluidity
It
is important to elaborate further upon the distinction between routine problem
solving and nonroutine creative problem solving. From the previous section, we already know
that routine problem solving deals with the recognition of many possible
solutions to a problem, given that the problem was solved through one of those
solutions in the past. Here, we can link
routine problem solving to the kind of trial-and-error strategizing and
calculation that animals other than human beings typically engage in, although
humans engage in routine problem solving as well. In this sense, routine problem solving
entails a mental activity that is stereotyped and wholly lacking in innovation,
because only perceptual associative connections are being made
by the mind of the animal. Images in
perception or memory are associated with one another and/or with some
environmental stimuli so as to learn some behavior, or produce some desired
result. If that result is not achieved,
an alternate route is pursued in a trial-and-error fashion.
For
example, Olton & Samuelson (1976) showed that rats are able to associate
routes in a maze with food acquisition.
In these experiments, food was placed at the end of each arm of an 8-arm
radial maze, and a rat was placed in the center of the maze and was kept in the
maze until all the food was collected.
At first, the rat did not associate a certain path with the food. But after trial-and-error, the rat eventually
got all of the food. In subsequent
tests, the food was placed in the same spot in the maze, and the same rat was
able to more quickly and efficiently associate the correct pathway with the
acquisition of food.
Associative
learning tests have been performed on humans and animals numerous times (Maier,
1932; Zentall et al., 1990; Dickinson, 1980; Rescorla, 1988; Macphail, 1996,
1998; Mackintosh, 1983, 1995; Hall, 1994, 1996). In his famous delayed matching to sample
tests, Hunter (1913) demonstrated that rats, raccoons, and dogs are able to
associate memories of a stimulus with the same stimulus perceived by the
animal, so as to solve some problem.
Wright (1989, 1997) has shown that pigeons and monkeys can perform
similar associations (also Roberts & Grant, 1974). A typical battery of I.Q. tests will have
several association tests whereby people are asked to solve routine problems,
such as linking a word to a picture and/or linking pictures to one another in a
familiar sequence (Sternberg, 1996, 2000, 2001).
Concerning
nonroutine creative problem solving, we already know
that this entails pursuing a wholly new way to solve a problem that has not
been solved previously, and that the problem-solver did not possess a way to
solve the problem already. Here,
however, we can draw a distinction between solving a nonroutine
problem through imitation with another’s help, and solving a nonroutine problem on one’s own. Some animals appear to have the capacity to
solve nonroutine problems, once the solutions have
been shown to them, or imitated for them.
Consider
the following cases that demonstrate an animal’s ability to creatively problem
solve through imitation with another’s help.
An octopus studied by Fiorito et al. (1990)
has been documented as being able to remove the cork
from a jar to get at food inside.
Initially, the octopus could see the food in the jar, but was unable to remove the cork to get at the food. The next time, Fiorito
et al. removed the cork while the octopus was
watching, resealed the jar, and placed it in the octopus’ tank. The octopus was able to associate the removal of the cork with the acquisition of food,
apparently remembered what Fiorito et al. had shown
him, and removed the cork himself to get at the
food.
Also,
researchers have documented chimps trying a couple of different ways to get at fruit in
a tree – like jumping at it from different angles or jumping at it from tree
limbs – before finally using a stick to knock it down. Scientists also have documented young chimps
watching older chimps do the same thing (Tomasello,
1990; Tomasello et al., 1987, 1993; Byrne, 1995;
Savage-Rumbaugh & Boysen,
1978; Whiten et al., 1996, 1999). Like
the octopus’ problem solving ability, this seems to be a form of nonroutine creative problem solving by use of another’s
help.
In
fact, several observations have been made of various kinds of animals engaged
in imitative behaviors: Whiten & Custance (1996),
Whiten et al. (1996), Tomasello et al. (1993), and Abravanel (1991) have documented imitative behaviors in
chimpanzees and children; Parker (1996), Miles, Mitchell, & Harper (1996),
Call & Tomasello (1994), and Russon
& Galdikas (1993, 1995) have witnessed young orangutans
imitating older orangutans using sticks and rocks to gather food, as well as
throw sticks and rocks at other orangutans in self-defense; Yando,
Seitz, & Zigler (1978), Mitchell (1987), and
Moore (1992) report mimicry and imitation in birds; and Heyes
& Dawson (1990) and Heyes, Jaldow,
& Dawson (1992) note evidence of imitative behaviors in rats.
However,
the number of possible solution routes is limited in these examples of routine
problem solving. If either the octopus’
corked jar were sealed with Crazy Glue, or there were no sticks around, or there
were no other older chimps or researchers around to show younger chimps how to
use sticks, the octopus and chimpanzees in the above cases likely would starve
to death. The possible solution routes
are limited because the mental repertoire of these animals is environmentally
fixed, and their tool usage (if they have this capacity) is limited to
stereotypical kinds of associations.
Bitterman (1965, 1975) tested the intelligence levels of
fish, turtles, pigeons, rats, and monkeys with a variety of tasks, including
pushing paddles in water, pecking or pressing lighted disks, and crawling down
narrow runways. Although such animals
improved their abilities to perform these tasks as time went on, Bitterman found that these species could perform only a
limited number of associative learning tasks.
This data, along with the data concerning the octopus, chimps,
orangutans, rats, and birds, supports the idea that these animals are engaged in
mostly habitual, stereotyped forms of associative thinking and learning (cf.
the new research concerning crows and other birds in Weir, Chappell, & Kacelnik, 2002; Emery & Clayton, 2004; Reiner, Perkel, Mello, &
Jarvis, 2004).
Unlike routine problem solving, which deals
with associative connections within familiar perspectives, nonroutine
creative problem solving entails an innovative ability to make connections
between wholly unrelated perspectives or ideas.
Again, this kind of problem solving can occur as a result of imitation through
another’s help – as in the above octopus and chimpanzee examples – as well as
on one’s own. A human seems to be the
only kind of being who can solve nonroutine problems on
his/her own, without imitation or help.
This is not to say that humans do not engage in solving nonroutine problems through imitation; in fact, nonroutine creative problem solving by imitation occurs all
of the time, especially in the earlier years of a human’s life. This is just to say that humans are the only
animals who have the potential to consider wholly new routes to problem
solving.
Koestler (1964)
referred to this quality of the creative mind as a bissociation
of matrices. When
a human bissociates, that person puts together
ideas, memories, representations, stimuli, and the like in wholly new and
unfamiliar ways for that person.
Echoing Koestler, Boden
(1990) calls this an ability to “juxtapose formerly unrelated ideas” (p. 5; see
Terzis, 2001). Thus, Dominowski (1995) claims that
“overcoming convention and generating a new understanding of a situation is
considered to be an important component of creativity” (p. 77).
When
animals associate, they put together perceptions, memories, representations,
stimuli, and the like in familiar ways.
For example, my cat associates my loud voice with her being in trouble
(and runs away), the rat associates a route with food, and the octopus
associates corked-jar-experiment-B with corked-jar-experiment-A and can more
quickly remove the cork from the jar to get at the
food in subsequent tests. As far as we
know, animals can only associate, so they always go for solutions to
problems that are related to the environment or situation in which they
typically reside. Humans bissociate: they are able to ignore normal associations and
try out novel ideas and approaches in solving problems. Such an ability to bissociate
accounts for more advanced forms of problem solving whereby the routine or
habitual associations are precisely the kinds of associations that need to
be avoided, ignored, or bracketed out as irrelevant to the optimal solution
(Finke, Ward, & Smith, 1992). Bissociation also has been utilized in accounting for
risibility, hypothesis-formation, art, technological advances, and the
proverbial “ah-hah,” creative insight, eureka moments humans experience when
they come up with a new idea, insight, or tool (Koestler,
1964; Boden, 1990; Holyoak
& Thagard, 1995; Terzis,
2001; Davidson, 1995).
So,
when we ask how it is that humans can be creative, part of what we are asking
is how they bissociate, viz., juxtapose formerly
unrelated ideas in ways that are wholly new and unfamiliar to them. To put it crudely, humans can take some idea
“found way over here in the left field of the mind” and make some coherent
connection with some other idea “found way over there in the right field of the
mind.” Humans seem to be the only
species that can engage in this kind of mental activity. How is this possible?
Steven Mithen (1996, 1999)
has put forward the notion of cognitive fluidity, an idea meant to explain
how one is able to respond creatively to nonroutine
problems in environments. Mithen’s idea has merit because, as he notes, he is an
archeologist who is applying the hard evidence of evolutionary theory, fossils,
and toolmaking to psychology. Not only is he speculating about the mind,
but he has the archeological evidence to support his speculations. Fodor (1998), Calvin (2004), and Stringer
& Andrews (2005) praise Mithen’s idea of
cognitive fluidity as being a significant hypothesis, as well as consistent
with archeological and neurobiological evidence. As a philosopher of mind and biology, I
applaud Mithen’s hypothesis as well (Arp, 2005a, 2005b, 2006a, 2006b, 2007a, 2007b).
Mithen (1996) sees the evolving mind as going through a
three-step process. The first step
begins prior to 6 mya when the primate mind was
dominated by what he calls a general intelligence. This general intelligence consisted of an
all-purpose, trial-and-error learning mechanism that was devoted to multiple
tasks. All behaviors were imitated,
associative learning was slow, and there were frequent errors made, much like
the mind of the chimpanzee.
The
second step coincides with the evolution of the Australopithecine line,
and continues all the way through the Homo lineage to H. neandertalensis.
In this second step, multiple specialized intelligences – or
modules, as evolutionary psychologists call them – emerge alongside general
intelligence. Associative learning
within these modules was faster, and more complex activities could be
performed. Compiling data from
fossilized skulls, tools, foods, and habitats, Mithen
concludes that H. habilis probably had a
general intelligence, as well as modules devoted to social intelligence
(because they lived in groups), natural history intelligence (because they
lived off of the land), and technical intelligence (because they made
tools). Neandertals
and H. heidelbergensis would have had all of
these modules, including a primitive language module, because their skulls
exhibit bigger frontal and temporal areas.
According to Mithen, the neandertals
and members of H. heidelbergensis would have
had the Swiss Army knife mind that evolutionary psychologists speak about (Cosmides & Tooby, 1987, 1992,
1994; Pinker, 1994, 1997, 2002; Shettleworth, 2000;
Gardner, 1993; Scher & Rauscher,
2003; Plotkin, 1997; Palmer & Palmer, 2002).
At this
point, we note a criticism Mithen makes of the
evolutionary psychologists who think that the essential ingredients of mind
evolved during the Pleistocene epoch. It
concerns the simple fact that modern-day humans deal with a wholly different set
of problems than did our Pleistocene ancestors. We can look back to the environment of the
Pleistocene and note how certain cognitive features emerged and became part of
the normal genetic makeup of the human.
However, as Mithen (1996) asks: “How do we
account for those things that the modern mind is very good at doing, but which
we can be confident that Stone Age hunter-gatherers never attempted, such as
reading books and developing cures for cancer” (pp. 45-46)?
The claim made by evolutionary psychologists like Cosmides, Tooby, and Pinker, namely,
that the distinct mental modules which emerged during the Pleistocene are adequate to account
for learning, negotiating, and problem solving in our world today, cannot
be correct. For Mithen, the potential variety of problems encountered in
generations subsequent to the Pleistocene is too vast for such a limited
Swiss Army knife mental repertoire; there are just too many situations for
which nonroutine creative problem solving
would have been needed in order to not only simply survive, but also to
flourish and dominate the earth. Pinker
(2002) thinks that there are upwards of 15 different domains, and various other
evolutionary psychologists have their own chosen number of mental domains (e.g.,
Buss, 1999; Shettleworth, 2000).
Here
is where the third step in Mithen’s (1996) evolution
of the mind, known as cognitive fluidity, comes into play. In this final step, which coincides with the
emergence of modern humans, the various mental modules are working together
with a fluid flow of knowledge and ideas between and among them. The information and learning from the modules
can now influence one another, resulting in an almost limitless capacity for
imagination, learning, and problem solving.
The working together of the various mental modules as a result of this
cognitive fluidity is consciousness for Mithen,
and represents the most advanced form of mental activity.
Mithen uses the schematization of the construction of a
medieval cathedral as an analogy to the mind and consciousness. Each side chapel represents a mental
module. The side chapels are closed off to
one another during construction, but allow people to have access from the outside
to attend liturgies, much like mental modules are closed off to one another
(encapsulated) and have specified input cues.
Once the cathedral chapels have been constructed and the central domed superchapel is in place, the doors of all of the chapels
are opened, and people are allowed to roam freely from chapel to chapel. Analogously, modern humans have evolved the
ability to allow information to be freely transmitted between and among mental
modules, and this cognitive fluidity comprises consciousness.
Mithen goes on to note that his model of cognitive fluidity
accounts for human creativity in terms of problem solving, art, ingenuity,
religion, and technology. His idea has
initial plausibility, since it is arguable that the neandertals
died off because they did not have the conscious ability to re-adapt to the
changing environment. It is also
arguable that humans would not exist today if they did not evolve consciousness
to deal with novelty (Arp, 2005a, 2005b, 2006b,
2007a; Bogdan, 1994; Cosmides
& Tooby, 1992; Gardner, 1993; Humphrey, 1992;
Pinker, 1997). No wonder, then, that
Crick (1994) maintains: “without consciousness, you can deal only with
familiar, rather routine situations or respond to very limited information in
new situations” (p. 20). Also, as Searle (1992) observes: “one
of the evolutionary advantages conferred on us by consciousness is the much
greater flexibility, sensitivity, and creativity we derive from being
conscious” (p. 109). Modular
processes can be used to explain how the mind functions in relation to
routinely encountered features of environments (Jackendoff,
1987; Dennett, 1991; Hale, 1989; Kimura, 1989; Crick & Koch, 1999). However, depending on the radicalness
of a novel environmental feature, inter-modular processes (Mithen’s cognitive fluidity) may be required to deal
effectively and, at times, creatively with the problem.
At
this point, we must speak about the importance of the effect that novel
environments have on the brain. There is
now solid evidence that the environment contributes to the formation,
maintenance – even re-growth or co-opting – of neurons and neural processes in
the brain. For example, it has been
shown that neuronal size and complexity, as well as numbers of glial cells, increase in the cerebral cortices of animals
exposed to so-called enriched environments, viz., environments where there are
large cages and a variety of different objects that arouse curiosity and
stimulate exploratory activity (Diamond, 1988; Diamond & Hopson, 1989; Mattson,
Sorensen, Zimmer, & Johansson, 1997; Receveur
& Vossen, 1998).
Also, there is recent data suggesting that regions of the brain can be
trained, through mental and physical exercises, to pick up tasks from other
regions (Holloway, 2003; van Praag, Kempermann, & Gage, 2000; Schwartz & Begley, 2002;
Clayton & Krebs, 1994).
The
implications of synapse strengthening from environmental stimuli, as well as
the ability of neuronal processes to perform alternate functions, are integral
to an evolutionary explanation of conscious creative problem solving in
humans. This is so because novelty
promotes unusual or extreme stimulation of cells, such stimulation
causes new connections to be made in the brain, new connections produce better
responses of the animal to external stimuli, and better responses increase the
likelihood of survival, so that genes are passed on to progeny.
Again,
environment is only half of the two-sided biological coin that includes nurture
(the environmental influence) as well as nature (the genetic influence). On the genetic side, chance mutations cause a
trait – like the neocortex and the consciousness that
emerges from it – to come to be; this trait may be useful in some environment;
the animal with that trait may survive to pass it on to its progeny; and so the
cycle of genetic adjustment and re-adjustment endlessly progresses. It is
wholly plausible that the mental properties necessary for creative problem
solving evolved from this interplay of genes and a novel environment. Thus, Barlow (1994) maintains: “Anything that
improves the appropriateness and speed of learning must have immense
competitive advantage, and the main point of this proposal is that it would
explain the enormous selective advantage of the neocortex. Such an advantage, together
with the appropriate genetic variability, could in turn account for its rapid
evolution and the subsequent growth of our species to its dominant position in
the world” (p. 10). This
information is significant to Mithen’s
account of cognitive fluidity because, given the novelty our early hominins dealt with in their environments, we can see how
it would have been possible for newer connections between areas of the brain to
have been made, as well as how wholly new connections could have arisen, acting
as the neurobiological conditions for cognitive fluidity.
Mithen’s idea of cognitive fluidity helps to explain our ability to bissociate because the potential is always there to make innovative, previously unrelated connections between ideas or perceptions, given that the information between and among modules has the capacity to be mixed together, or intermingle. So in essence, cognitive fluidity accounts for bissociation, which accounts for human creativity in terms of problem solving, art, ingenuity, and technology. This is not to say that the information will in fact mix together, and then be bissociated by an individual. This is just to say that there is always the potential for such a mental process to occur in our species. In the words of Finke et al. (1992): “people can generate original images that lead to insight and innovation or commonplace images that lead nowhere, depending on the properties of those images” (p. 2).
Dry mind versus wet mind
Now, as noted in the introduction, it is possible to speak about creative problem solving from the
biological, evolutionary perspective as well as from a non-biological,
computational perspective:
researchers have been able to get both biological entities and alife creations – those that are purely simulated, as well
as those that are connected to robotic mechanisms – to engage in fairly complex
forms of problem solving. The dry-mind,
computational approach to problem solving begins with the idea that the
processes and procedures that computers go through are analogous to the mind’s
processes and procedures. Since the
1950s, computers have been utilized as a model for the workings of the mind
(Turing, 1950; von Neumann, 1958; Jackendoff, 1987; Holyoak & Thagard, 1995;
Copeland, 1993). Such an idea makes
sense, since it seems that both the mind and a computer do things like
calculate, compute, and engage in algorithmic processes, as well as solve
problems. Also, such a close connection
between computers and minds makes sense because, after all, it is human
minds that program computers to go through computations, calculations,
algorithms, and the like!
Some thinkers have become so enamored by the close affinity between
computational processes and mental processes that they think
the mind just is a computational process. This is called the strong artificial
intelligence position. Those
thinkers who share this fundamental metaphysical position will pursue
methodological routes whereby they set up computer programs and robotic
mechanisms, run various tests using these programs and mechanisms, and then
utilize the results to make direct inferences about the workings of the
mind. By this thinking, then, one day it
will be possible to construct computers, robots, androids, synthetics, or cyborgs that will not only behave like human beings, but
also will have the same mental states as human beings, including
consciousness. Kosslyn
& Koenig (1995) call this the dry-mind approach to understanding the
mind, in contradistinction to the wet-mind approach.
The dry-mind approach to understanding the mind is called dry
because the investigation of the mind takes place at the computational level,
wholly removed from the hardware of the brain on which such mental
activity takes place, or from which such mental activity emerges. Consider Pylyshyn’s
(1980) dry-mind claim that “in studying computation it is possible, and in
certain respects essential, to factor apart the nature of the symbolic process
from the properties of the physical device in which it is realized” (p.
115). Conversely, if one studies the
processes and systems of the brain itself to understand the workings of the
mind, then one is dealing with the moist piece of matter that is the substrate
for such computational types of processing.
Given that the brain is a moist piece of matter, Kosslyn
& Koenig (1995) call this methodology wet.
In his influential work on vision, Marr (1983) described three levels of
analysis of perceptual information processing.
The first is a computational level, whereby we seek the information
processing goal or task. The second
level deals with how the information is represented, as well as the algorithm
for transforming it. The third level
deals with how the algorithm is implemented in the hardware. We can note two important implications of
this tripartite system for the distinction between dry-mind and wet-mind
approaches.
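Before turning to those implications, Marr's three levels can be made concrete with a small illustrative sketch. The task chosen here (reporting the average brightness of a scene from a few sensor readings) and its decomposition are assumptions made for exposition only, not an example drawn from Marr.

```python
# An illustrative decomposition of a toy task into Marr's three levels.
# The task itself (averaging a few brightness readings) is an assumed example.

# Level 1 - computational theory: WHAT is computed and why.
#   Goal: report the average brightness of a scene from individual readings.

# Level 2 - representation and algorithm: HOW the information is represented
#   (a list of numbers) and transformed (a running sum divided by the count).
def average_brightness(readings):
    total = 0.0
    for r in readings:
        total += r
    return total / len(readings)

# Level 3 - implementation: the same algorithm could be realized in silicon,
#   in a spreadsheet, or (on the computational view) in neural tissue; here it
#   simply runs on whatever hardware executes this Python interpreter.
print(average_brightness([0.2, 0.8, 0.5]))
```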
First, the first two levels of Marr’s system are consistent with the
position in the philosophy of mind and cognitive science known as computational
functionalism. Computational
functionalists generally are advocates of strong artificial intelligence, and
they see the mind in terms of the causal/functional relations between inputs,
outputs, and other mental (i.e., functional) states of some system. The physical realization in some system is
not the essence of the mind; rather, the mind is characterized in terms of its
role in relating inputs to outputs, and its relation to other functional
components of the system (see Block, 1994; Fodor, 1983, 1998; Putnam, 1960; Churchland, 1986).
Second, the third level of Marr’s system deals with how algorithms can be
implemented in the hardware. Note the
usage of the word hardware.
According to functionalists who advocate strong artificial intelligence,
it is possible for intelligent (software) behavior to be realized or emerge
from any potential material medium (hardware).
If the silicon and metal in a robot or some plasma in an alien are set up to
function in the right way, then such robots or aliens will have mental
states. Further, depending upon the
sophistication of the causal connections, such robots or aliens would have
conscious mental states. The important
point is that it makes no difference to the functionalist what material medium
is utilized, as long as that material medium is functioning in the right way so
as to yield intelligent behavior (Block, 1980; Churchland,
1986; Mahner & Bunge,
2001).
A basic approach computational functionalists utilize in trying to
describe the mind is known as connectionism. According to advocates of connectionism, mind
can be described as a vast network of nodes, which are supposed to be
analogous to neurons, and whose different and variable excitation levels
explain mental activity (Feldman & Ballard, 1982; Bechtel & Abrahamsen,
2002; Smolensky, 1988; Rumelhart
& McClelland, 1985). Interestingly
enough, connectionist models are structured on the concept of the parallel
processing involved in neural networks.
Such a pattern consists of units, or nodes, that are linked together by
numerous pathways. Echoing the tripartite
neuronal layout of the brain, a connectionist model generally has
three classes of nodes, viz., input nodes, hidden nodes, and output nodes. Input nodes receive information from the
world, output nodes show the result of processing, and the hidden nodes in
between carry out the processing. Nodes
receive signals from other nodes, and output signals to other nodes, in this
huge interconnected parallel processing network. The mind, then, just is the entire networking
process. Dennett (1991) endorses this
view, and he notes that the conscious mind “can be best understood as the
operation of a ‘von Neumannesque’ virtual machine
implemented in the parallel architecture of a brain” (see von Neumann,
1958). Dennett’s justification for this
position is simply this: “Since any computing machine at all can be imitated by
a virtual machine on a von Neumann machine, it follows that if the brain is a
massive parallel processing machine, it too can be perfectly imitated by a von
Neumann machine” (p. 217).
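To make the architecture just described concrete, here is a minimal sketch of the kind of three-class node network connectionists appeal to: input nodes receive information from the “world,” hidden nodes carry out the processing, and output nodes show the result. It is an illustrative toy, not a reconstruction of any model cited above; the layer sizes, random weights, and sigmoid activation are assumptions chosen only for the example.

```python
# A toy three-layer "node" network of the kind described above: input nodes,
# hidden nodes, and output nodes linked by weighted pathways. Sizes, weights,
# and the sigmoid activation are illustrative assumptions only.
import math
import random

def sigmoid(x):
    """Squash a node's summed input into a graded activation level."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    """Each node sums its weighted inputs from the previous layer of nodes."""
    return [sigmoid(sum(w * i for w, i in zip(node_weights, inputs)))
            for node_weights in weights]

random.seed(0)
n_input, n_hidden, n_output = 3, 4, 2
w_hidden = [[random.uniform(-1, 1) for _ in range(n_input)] for _ in range(n_hidden)]
w_output = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_output)]

stimulus = [0.9, 0.1, 0.4]           # input nodes: information from the world
hidden = layer(stimulus, w_hidden)   # hidden nodes: carry out the processing
response = layer(hidden, w_output)   # output nodes: the result of processing
print(response)
```

A trained network of this shape would have its weights adjusted by a learning rule; here the weights are left random, since the point is only the flow of activation from input to hidden to output nodes.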
Now, advocates of connectionism such as Dennett (1986, 1991), Smolensky (1988), Pinker & Prince (1988), and Rumelhart & McClelland (1985) follow Holyoak & Thagard (1995) in
their belief that, since the workings of the mind “are not directly observable,
theorizing about them is necessarily analogical: we form hypotheses about
mental operations by comparing them to processes that we can more directly
observe. Computer programs are fully inspectable by us, so we can know the data structures they
use to represent information, as well as the algorithms that process
information by operating on those structures… Cognitive theories inspired by
the computational analogy hypothesize first a set of representational
structures and second a set of computational processes that operate on those
structures to produce intelligent behavior” (pp. 238-9). This argument trades on the analogy between
computational processes and mental processes.
For this connectionist analogical argument to be made stronger, it would
be necessary to show that the brain and computational hardware are very similar
to one another. Interestingly enough,
when we do in fact investigate the relationship between the neuronal
wiring of the brain and the nodal wiring of computer hardware, we notice
some fundamental differences between the two.
The first difference between the brain and computational hardware has to
do with the fact that there are several types of neurons having a variety of
functions in the brain, whereas this is not the case for the computational
nodes at work in, for example, Thagard, Holyoak, Nelson, & Gochfeld’s
(1990) ARCS (Analog Retrieval by Constraint Satisfaction) and Holyoak & Thagard’s (1989)
ACME (Analogical Mapping by Constraint Satisfaction) models. Stellate, pyramidal, spindle, and chandelier
neurons have a variety of responses, as well as differing specialized functions
in the brain (Kandel, Schwartz, & Jessell, 2000; Churchland,
1986). By contrast, computational nodes
in ARCS and ACME follow one basic type of functioning having to do with what is
known as the on-off logic gate.
The on-off logic gate refers to the fact that computational nodes can
respond to information in only one of two ways, having to do with either an on-off,
yes-no, or 1-0 response, depending upon the program. Early on in the history of cognitive science,
the computer’s logic gates and the brain’s neurons both were thought to follow
this basic pattern; the neuron was conceived as either on (firing) or off (not
firing) (see Turing, 1950; von Neumann, 1958; Jackendoff,
1987; Holyoak & Thagard,
1995; Copeland, 1993). However, now we
know that whereas nodal logic gates are either on or off, neurons are never
fully off, but are said to maintain a resting potential prior to their
inhibitory or excitatory activity. In a
sense, neurons are always on to a certain degree (Kandel
et al., 2000). This represents a second
way in which the brain and computational hardware are different from one
another.
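The contrast just drawn can be put in a small sketch: the on-off unit below answers only 1 or 0 at a fixed threshold, whereas the neuron-like unit maintains a nonzero resting level that excitatory input raises and inhibitory input lowers. The threshold, resting level, and floor values are arbitrary assumptions chosen to make the difference visible, not empirical figures.

```python
# Toy contrast between a nodal on-off logic gate and a graded, never-fully-off
# neuron-like unit. All numeric values are illustrative assumptions.

def logic_gate_node(summed_input, threshold=0.5):
    """On-off node: the response is either 1 ("on") or 0 ("off")."""
    return 1 if summed_input >= threshold else 0

def neuron_like_unit(summed_input, resting_level=0.2, floor=0.05):
    """Graded unit: never fully off; excitation raises and inhibition lowers
    activity relative to a nonzero resting level."""
    activity = resting_level + summed_input   # input may be positive or negative
    return max(floor, min(1.0, activity))     # activity stays above a small floor

for stimulus in (-0.3, 0.0, 0.4, 0.9):
    print(stimulus, logic_gate_node(stimulus), round(neuron_like_unit(stimulus), 2))
```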
There are other ways in which neurons and nodes are different from one
another. (A) Neurons (and cells in
general) have the properties of internal-hierarchical self-maintenance and
homeostasis, whereas nodes do not. The
organelles of a cell are specialized in their activities and organized in such
a way so as to preserve the stability of the cell (Smolensky,
1988; Rumelhart & McClelland, 1985). (B) Kandel et al.
(2000) and Pinker & Prince (1988) have demonstrated that the speed at which
neurons process information is much slower than that of nodal connections. (C) A neuron can have several thousand dendritic connections with other neurons, whereas a typical
node can have only several connections at most (Kandel
et al., 2000; Smolensky, 1988; Rumelhart
& McClelland, 1985). (D) Unlike
computational networks that have specific locations for memory, in mammals
memory is not associated with any specific area of the brain, but is
distributed throughout the brain (Kandel et al.,
2000; Pinker & Prince, 1988). (E)
Finally, the brain is highly adaptable in the sense that certain groups of
neurons can perform alternate functions in the event of brain damage. There is a kind of flexibility or
malleability present in neuronal networks that is not found in nodal networks (Kandel et al., 2000; Copeland, 1993; Searle, 1992; Fodor,
2001; Churchland, 1986).
Now, simply to point out the differences between neurons and nodes is not
enough to show the deficiencies in the computational approach. We need to show that these differences
actually make a difference in terms of an organism’s ability to creatively
problem solve, such that if the organism did not have a biological structure
with a specific evolutionary history, then it could not effectively creatively
problem solve. In other words, we need
to show that the biological particularities, complete with their evolutionary
history, matter to creative problem solving.
First of all, a biological organ like the brain, its genetic make-up, and
its environment are involved in a causal interplay, aiding each other in
evolving and developing. This dynamic
relationship of organism, genes, and environment is what thinkers refer to as epigenesis (e.g., Berra,
1990). Conscious brains affect
environments and biology, and environments and biology affect conscious
brains. This microcosmic interrelation
is representative of interrelations present at a macrocosmic level in all of
nature. All of nature is pliable, in
this epigenetic sense, and this is a fundamental insight.
Somewhat paradoxically (or ironically), if it were not for the fortuitous
genetic mutations within these specific biological entities and the
specific environmental shifts that make up the specific evolutionary history
of these specific biological entities, organisms would never have evolved
light/dark sensitivity areas, brains, mental states, and then abilities to
creatively problem solve. At the same
time, if organisms did not have an epigenetic history whereby they engaged in
some forms of proto-typical brain process functioning and problem solving, they
never would have been able to eventually creatively negotiate
environments so as to survive and flourish.
This interplay is one whereby genes affect brains, environments affect
brains, brains affect genes, brains affect
environments, and so on, in one huge epigenetic cycle.
The neural circuitry necessary for creative problem solving in humans
evolved for the purposes of negotiating specific organic
environments. Traits (e.g., neurons,
organs, psychological states) develop in evolutionary history to function as a
result of chance mutations and the natural selection of the trait that is most
fit, given the particular environment in which the trait exists. The varieties of neurons in an animal,
complete with their specialized functioning, testify to the influence of the
environment. There are
olfactory receptor neurons specialized to process odors, auditory receptor
neurons specialized to process sound waves (both through processes of
transduction), motor neurons specialized to produce muscular contractions, interneurons facilitating speed of signal between and among
neurons, etc. In the visual system, some
neurons are responsive to colors, some are responsive to movement, while other
are responsive to lines in specific orientations, faces, stereoptic,
and several other features of objects in the environment. Neurons come in a wide variety of types,
having a variety of specialized functions and differing greatly in terms of
their size, axonal length, and characteristic pattern of dendritic
arborization.
Their myriad forms and functions are dependent upon internal and
external environmental factors (Kandel et al., 2000; Churchland, 1986).
The
Gestalt psychologists noted that the visual system has built-in mechanisms
whereby the visual scene is grouped according to the following principles: closure,
the tendency of the visual system to ignore small breaks or gaps in objects; good
continuation, the tendency to group straight or smoothly curving lines
together; similarity, the tendency to group objects of similar texture
and shape together; and proximity, the tendency to group objects that
are near to one another together.
Numerous studies have confirmed these principles as reflective of the
visual system (Wertheimer, 1912, 1923; Kanizsa, 1976,
1979; Peterhans & von der
Heydt, 1991; Gray, 1999; Sekuler
& Blake, 2002). These mechanisms can
work only if the environment actually displays the features on which the
visual system is capitalizing. There
must be these kinds of regularities out there in the world, or else it seems
these principles could not be delineated (Brunswik
& Kamiya, 1953).
Over 100 years ago, James (1892) noted that mind and world “have evolved
together, and in consequence are something of a mutual fit” (p. 4), and the
Gestalt principles underscore this mutual fit.
Events
in the physical environment are composed of materials that the human brain,
complete with its specialized modules, has evolved to perceive and
discriminate. These materials take forms
ranging from specific chemicals, to mechanical energy, to electromagnetic
radiation (light), and are discriminated by the different sensory modalities
that are specifically attuned to these stimuli.
The important point to note is that the various specialized modules
never would have come to be if it were not for the specific organic
environment in which the organism found itself.
Likewise, the mind contains specific mental modules that evolved in our
past to solve specific problems of survival, such as face recognition, mental
maps, intuitive mechanics, intuitive biology, kinship, language acquisition,
mate selection, and cheating detection – again – given the particular
environment in which the early hominin existed. Adaptive problems are “problems that are
specifiable in terms of evolutionary selection pressures, i.e., recurring
environmental conditions that affect, or have affected, the reproductive
success of individual organisms” (Wheeler & Atkinson, 2001, p. 242). It was the specific environmental pressures
of the Pleistocene that acted as the conditions for the possibility of creative
problem solving. The successful
progression from the typical jungle environments to the atypical and novel
savannah-type environments of our early hominin
ancestors was the occasion for a mental capacity to emerge that could
creatively handle the new environment. Those
specific organic/natural conditions make all the difference when
describing conscious creative problem solving in humans.
It seems fundamentally misguided to investigate mental processes wholly
divorced from the brain on which these mental processes are realized. Churchland (1986)
makes at least four points that dry-mind computational functionalists should keep
in mind:
1. Our mental states and processes are states and processes of our brains.
2. The human nervous system evolved from simpler nervous systems.
3. Brains are by far the classiest information processors available for study. In matters of adaptability, plasticity, appropriateness of response, motor control, and so forth, no program has ever been devised that comes close to doing what brains do – not even to what lowly rats do. If we can figure out how brains do it, we might figure out how to get a computer to mimic how brains do it.
4. If neuroscientists are working on problems of memory and learning, studying cellular changes, synaptic changes, the effects of circumscribed lesions, and so forth, it is perverse for a cognitive scientist trying to understand memory and learning to ignore systematically what the neuroscientists have discovered. (p. 362)
Further, Sober (1990) titles one of his articles “Putting the function
back into functionalism,” noting that the mind’s information processing (functionalism)
should not be wholly divorced from the biological processes (functions) upon
which this information processing depends (cf. Ruse, 1971, 1973). It seems that all three of Marr’s
levels of analysis of information processing should be taken into consideration
when investigating a biologically-based entity’s mental functioning. The specific hardware of the brain does make
a difference to the processing of information when we consider the specific
evolutionary history of the brain.
Connectionism and problem solving
The issue of the hardware of the brain and computational networks aside,
as Churchland intimates in the quotation above,
another important test of intelligence has to do with the behaviors that
organisms and computational mechanisms exhibit.
Ever since Turing’s (1950) test, in conjunction with experiments
performed in behavioral psychology, one of the ways in which we can judge
whether some thing is intelligent is by how well it solves problems. We run worms and rats through mazes to solve
problems in order to get a sense of their intelligence level. We also run robots through police and
military tests to see if they can solve the problems of not being shot or blown
up by a bomb. The problem for
connectionists (and other functionalists) is that such robots, running
programs based on computational models, consistently fail at even the simplest
of routine problem solving exercises (Gerkey
& Matarić, 2002; Franz, 2003; Moravec,
1999a, 1999b, 2000; cf. Brooks, 1991).
Minimally, this failure calls the legitimacy of the computational
approach into question. After discussing
the fact that we “still don’t have the fabled machine that can make breakfast
without burning down the house” or “even one that can learn anything much
except statistical generalizations,” Fodor (2001) notes that the “failure of
our AI is, in effect, the failure of the Classical Computational Theory of the
Mind to perform well in practice. Failures of a theory to perform well in practice are much like
failures to predict the right experimental outcomes (arguably, indeed, the
latter is a special case of the former)” (pp. 37-8).
Moravec (1999a, 1999b, 2000) speculates that by
2010, robots with connectionist networks of 5,000 MIPS
(million-instructions-per-second) will achieve a cognitive status comparable
to that of a lizard. Provided the money is there
for further research, such robots will be followed by mouselike
(100,000 MIPS), monkeylike (5 million MIPS), and humanlike (100 million MIPS)
robots in generations to come. Now, is
it possible that one day in the future artificial beings will become creative
problem solvers? Yes. However, I believe along with Searle (1992)
that this would be a reality only if we could simulate biological processes in
other material media. Is it also possible
that other beings on other planets lacking neuro-matter
could be conscious? Yes. However, what matters is how it is that human beings on this planet in this
world have become conscious creative problem solvers. This is why I think it is invaluable to
investigate the biology and evolution of the mind.
Why is it that robots having dry-mind computational networks break down
while trying to complete minimally complex tasks in routine situations (Finke
et al., 1992; Churchland, 1993; Dreyfus, 1992; Moravec, 1999a, 1999b, 2000)? Part of the answer is that they lack the
conscious abilities to flexibly select and integrate information. They just do not fall into the category of
things having an evolutionary history, out of which these conscious abilities
emerged. If such computationally-based
mechanisms cannot engage successfully in routine forms of problem solving, then
how could they engage successfully in nonroutine
creative forms of problem solving?
Do we really think that, one day, computationally-based mechanisms will
be able to make the kind of bissociative connection
that Dr. Wells made concerning the use of nitrous oxide?
A lack of consciousness – understood as Mithen’s
form of cognitive fluidity – is a good candidate for explaining why, for example,
ants can get thrown off track so easily by the introduction of novel stimuli into their
environments (Moravec, 1999b; McFarland & Bosser, 1993), moths get confused and kill themselves in
flames or luminescent bug-killers (Moravec, 1999b;
McFarland & Bosser, 1993), connectionist networks
are unable to distinguish syntax from semantics (Searle, 1990), the robotic eye
cannot discern that the same word written in different cases and typefaces (e.g., CAT, cat,
Cat) is one and the same word (Copeland, 1993), and robots continually blow themselves up in military
training exercises while trying to negotiate environments as they are being
bombarded by novel stimuli (Moravec, 1999a, 2000;
also Churchland, 1993; Fodor, 2001).
Following
Sternberg (2001), it seems that processes for deciding what to do, learning what
to do, and learning how to do it – viz., those processes rightly associated with
unconscious modularity (as is present in ants and moths) and/or computational
efficiency (as is present in a pre-programmed robot) – are not enough in
negotiating this incredibly complex and diverse world. A conscious process (again, like the
one Mithen envisions) seems necessary in our problem
solving, whether we attempt to cope with new situations, select new
environments when old ones become unsatisfactory, creatively re-use information
from old environments in new ones, coherently set up goals via the selectivity
and integration of information, synthesize disparate concepts, construct
theories to explain phenomena, or invent tools based upon mental blueprints
(Crick, 1994; Searle, 1992; Finke et al., 1992).
I
believe that if an artificially intelligent being with a dry mind is to become
creative, such a being must be conscious.
But consciousness is a functionally-emergent property of the brain
of the human species specifically, and brains are organs with an epigenetic
history (which includes the interplay of genotypic and phenotypic
characteristics abiding by evolutionary principles). My argument goes, then, that if an
artificially intelligent being is going to become creative, it will need some
kind of epigenetic history. By
transposition: if a being does not have an epigenetic history, then it
cannot be a biological organism having a brain; if such a being lacks a brain,
then it cannot be conscious; and without consciousness, a being cannot
creatively problem solve. I think
this lack of an evolutionary history is what accounts for many of the problems
thinkers have with the dry-mind computational approach to understanding the
mind of a biological entity (see Copeland, 1993; Dreyfus, 1992; Kosslyn & Koenig, 1995; Zornetzer,
Davis, & Lau, 1990; Churchland & Sejnowski, 1992; McFarland & Bosser,
1993; Born, 1987).
The philosophical upshot of my investigation of the relationship between
dry-mind and wet-mind is to make clear that wet-mind biological and
evolutionary approaches have amassed enough information such that dry-mind
approaches can now take cues from these burgeoning wet-mind sciences. It used to be that functionalists and other
cognitive scientists were looked to as having an accurate picture of how the
mind works, while biologists and evolutionists were looked to secondarily as
having no real philosophical say in mental phenomena (Putnam, 1960; Fodor,
1983; Barlow, 1994; Lycan, 1995; Block, 1994). Interestingly enough, this kind of
functionalist view undermines its own intentions to be grounded
materialistically, since the mind is viewed as a new kind of computational or
linguistic epiphenomenon that does not look to matter, viz., the
brain, for its identity.
However, with the advances made in the neurobiological and evolutionary
sciences, dry-mind can (and really should) look to wet-mind for some
guidance. Following Churchland
(1986), “if we can figure out how brains do it, we might figure out how to get
a computer to mimic how brains do it” (p. 362).
In fact, the move cognitive scientists have made into parallel
distributed processing (PDP) demonstrates that cognitive science is looking to
neuroscience in trying to explain the workings of the mind – rather than the
other way around – since PDPs are set up to reflect actual
neural networks more accurately. As
was noted above, connectionists model their networks after the tripartite
neuronal system of the brain.
This is not to say that work in cognitive science and artificial
intelligence is unimportant; to the contrary, we have learned much about the
brain, mind, human behavior, and evolution because of such research (e.g.,
Bechtel & Abrahamsen, 2002; Lek & Guegan, 2000; Hinton & Nowlan,
1987; Barto, 1985; Smith, 1987; Christiansen & Chater, 1994). This
is just to say that cognitive scientists need to take information, cues, and
clues from the biological, psychological, social, and evolutionary
sciences. In fact, for years researchers
in cognitive science and artificial intelligence have been utilizing
evolutionary principles when they construct their own simulated mental
apparatuses and environments.
For example,
in an article entitled “Evolving parallel computation,” Thearling & Ray (1997) applied a natural selection
model to their software system, called TIERRA. TIERRA simulated the replication of
multi-cellular organisms in complex and changing environments over several
generations. At first, the virtual
organisms processed information in a slower, if-then, serial fashion. Their findings were impressive because,
through several generations and in several changing environments, the virtual
multi-cellular organisms were able to “evolve” parallel processing mechanisms
to deal with the information they were receiving from their virtual
environments. Other researchers have
utilized evolutionary principles as a starting point for their computational
models, whether it be to understand how computers can learn (Williams, 1988; Nolfi, 1990; Yao & Liu, 1997,
1998; French & Sougne, 2001), exchange data (Lerman & Rudolph, 1994), discriminate in-coming
information in some virtual environment (Menczer
& Belew, 1994), devise a primitive language
(Hadley, 1994), solve problems (Koza, 1990) or even
play checkers (Chellapilla & Fogel,
2001).
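The evolutionary principles these researchers start from (variation, selection, and inheritance over generations) can be conveyed with a minimal genetic-algorithm sketch. The example below is a generic toy, not TIERRA or any of the systems cited above; the bit-string “environment,” population size, and mutation rate are arbitrary assumptions.

```python
# A generic genetic-algorithm sketch: a population of candidate solutions is
# scored against an "environment," the fitter candidates are selected and
# copied with random mutation, and the cycle repeats over generations.
# Illustrative toy only; parameters are arbitrary assumptions.
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 1]            # an arbitrary "environment" to fit

def fitness(genome):
    """Count how many genes match the environment's demands."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Copy the genome, flipping each gene with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

random.seed(1)
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)   # score and rank the population
    if fitness(population[0]) == len(TARGET):    # a perfectly adapted genome
        break
    survivors = population[:10]                  # selection: keep the fitter half
    offspring = [mutate(random.choice(survivors)) for _ in range(10)]
    population = survivors + offspring           # inheritance with variation

print(generation, population[0], fitness(population[0]))
```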
So it is understood by several researchers that, just as organic
processes do what they do because they have been selected for in an
evolutionary history (e.g., Goodale & Murphy,
2000; Kosslyn & Koenig, 1995; Allman,
2000; Desimone et al., 1984; Casagrande
& Kaas, 1994; Edelman & Tononi,
2000; Shallice, 1997; Marr, 1983; Sereno
et al., 1995), so too, computational systems can be modeled on similar virtual
or simulated “evolutionary” histories.
And further, the information gleaned from these simulated tests can help
in understanding the natural world, of which the mind is a part.
The end result is that biologists and evolutionists can now be looked to
as contributing significant pieces to the puzzle concerning the workings of the
mind. When all is said and done, I
support Churchland’s (1993) claim that cognitive
science should not be autonomous with respect to neuroscience, psychology, and
the other empirical sciences. I endorse
Fodor’s (1998) observation that archeology and the biological sciences are good
places to uncover the nature of the mind.
I concur with Pinker (1994), echoing Chomsky, that if research in artificial
intelligence is to study effectively the mind, then it needs “constant
integration across disciplinary lines” (p. 15).
Finally, I agree with Donald (1997) that the “problem of cognitive
evolution demands the widest possible range of information, [from] neurolinguistics, anthropology, paleontology, neuroanatomy, and especially cognitive psychology” (p.
356).
Acknowledgements
I wish to
thank George Terzis, Brian Cameron, Eric LaRock, and Benoit Hardy-Vallee
for comments on earlier versions of this paper.
References
Abravanel, E. (1991). Does immediate
imitation influence long-term memory for observed actions? Journal
of Experimental Child Psychology, 21, 614-23.
Allman, J. (2000). Evolving brains.
Arp, R. (2005a). Scenario
visualization: One explanation of creative problem solving. Journal of
Consciousness Studies, 12, 31-60.
_____,
(2005b).
Selectivity, integration, and the psycho-neuro-biological continuum. Journal of Mind and
Behavior, 6&7, 35-64.
_____,
(2006a).
Awareness and consciousness: Switched-on rheostats. Journal of Consciousness
Studies, forthcoming.
_____, (2006b). The environments of our Hominin ancestors, tool usage, and scenario visualization.
Biology & Philosophy, 21, 95-117.
_____,
(2007a).
Scenario visualization: An evolutionary account of vision-related creative
problem solving.
_____,
(2007b).
An integrated approach to the philosophy of mind.
Barlow, H. (1994). What is
the computational goal of the neocortex? In C. Koch
& J. Davis (Eds.), Large-scale neuronal theories of the brain (pp.
1-22).
Barto, A. (1985). Learning by statistical
co-operation of self-interested neuron-like computing elements. Human Neurobiology, 4, 229-56.
Bechtel,
W., & Abrahamsen, A. (1991). Connectionism and the mind: An introduction to
parallel processing in networks.
_____,
(2002). Connectionism
& the mind: Parallel processing dynamics and evolution.
Berra, T. (1990). Evolution
and the myth of creationism. Stanford:
Bitterman, M. (1965). The
evolution of intelligence. Scientific American, 212,
92-100.
_____,
(1975). The comparative analysis of learning. Science, 188,
699-709.
Block, N. (1980).
Introduction: What is functionalism? In N. Block (Ed.),
_____,
(1994). Functionalism. In S. Guttenplan (Ed.), A companion
to the philosophy of mind (pp. 315-32).
Boden, M. (1990). The creative mind: Myths
and mechanisms.
Bogdan, R. (1994). Grounds for
cognition: How goal-guided behavior shapes the mind.
Born, R.
(Ed.), (1987). Artificial intelligence: The case against.
Brooks, R. (1991). Intelligence without reason. Proceedings of the Twelfth
International Joint Congress on Artificial Intelligence, San Mateo, California,
12, 569-95.
Brunswik, E., & Kamiya, J. (1953). Ecological cue-validity of
proximity and other Gestalt factors. American Journal of Psychology,
66, 20-32.
Buller, D. (2005). Adapting
minds: Evolutionary psychology and the persistent quest for human nature.
Buss, D. (1999). Evolutionary psychology: The new science
of the mind.
Byrne,
R. (1995). The thinking ape: Evolutionary origins of intelligence.
Call,
J., & Tomasello, M. (1994). The social learning
of tool use by orangutans (Pongo pygmaeus).
Human Evolution, 9, 297-313.
Calvin, W. (2004). A brief history of the mind: From apes
to intellect and beyond.
Casagrande, V.,
& Kaas, J. (1994). The afferent,
intrinsic, and efferent connections of primary visual cortex in primates.
In A. Peters & K.
Chellapilla, K.,
& Fogel, D. (2001). Evolving an expert
checkers playing program without using human expertise. IEEE
Transactions on Evolutionary Computation, 5, 422-28.
Christiansen, M., & Chater, N.
(1994). Generalisation and connectionist language learning. Mind
and Language, 9, 273-87.
Churchland, P. S. (1986). Neurophilosophy:
Toward a unified science of the mind-brain.
_____, (1993). The co-evolutionary
research ideology. In A. Goldman (Ed.),
_____, (1997). Can neurobiology teach us anything about
consciousness? In N.
Block et al. (Eds.), The nature of
consciousness (pp. 127-40).
Churchland, P.
S., & Sejnowski, T. (1992). The
computational brain: Models and methods on the frontiers of computational
neuroscience.
Clayton,
N., & Krebs, J. (1994). Hippocampal growth and attrition in birds affected by experience. Proceedings of the
Copeland, J. (1993). Artificial intelligence.
Cosmides, L., & Tooby, J. (1987). From evolution to behavior: Evolutionary psychology
as the missing link. In J. Dupre (Ed.), The latest on the best: Essays on evolution and
optimality (pp. 27-36).
_____,
(1992). The psychological foundations of culture. In J. Barkow, L. Cosmides, & J. Tooby (Eds.), The
adapted mind (pp. 19-136).
_____,
(1994).
Origins of domain specificity: The evolution of functional organization. In L. Hirschfeld & S. Gelman
(Eds.), Mapping the mind: Domain specificity in
cognition and culture (pp. 71-97).
Crick, F. (1994). The astonishing hypothesis.
Crick, F.,
& Koch, C. (1999). The problem of consciousness. In A.
Damasio (Ed.), The Scientific American book of the
brain (pp. 311-24).
Davidson, J. (1995). The suddenness of insight. In R. Sternberg & J. Davidson
(Eds.), The nature of insight (pp.
7-27).
Deacon, T. (1997). The symbolic species: The co-evolution
of language and the brain.
Dennett,
D. (1986). Content and consciousness.
_____, (1991). Consciousness explained.
Desimone, R., Albright, T., Gross, C., & Bruce, C.
(1984). Stimulus-selective properties of inferior temporal
neurons in the macaque. Journal of Neuroscience, 4, 2051-62.
Diamond, M. (1988). Enriching heredity.
Diamond,
M., & Hopson, J. (1989). Magic trees of the mind.
Dickinson,
A. (1980). Comparative animal learning theory.
Dominowski, R. (1995). Productive problem solving. In S. Smith, T. Ward, & R.
Finke (Eds.), The creative cognition
approach (pp. 73-96).
Donald, M.
(1991). Origins of the modern mind.
_____, (1997). The mind considered from a historical
perspective. In D. Johnson & C. Erneling (Eds.), The future of the cognitive revolution (pp.
355-65).
Dreyfus,
H. (1992). What computers still can’t do: A critique of artificial reason.
Edelman, G., & Tononi, G. (2000).
Reentry and the dynamic core: Neural correlates of conscious experience. In T. Metzinger
(Ed.), Neural correlates of consciousness (pp. 139-56).
Emery, N., & Clayton, N. (2004). The mentality of crows:
Convergent evolution of intelligence in corvids and
apes. Science, 306, 1903-1907.
Feldman, J., & Ballard, D. (1982). Connectionist
models and their properties. Cognitive Science,
6, 205-54.
Fiorito, G., et al. (1990). Problem solving ability of Octopus vulgaris Lamarck (Mollusca, Cephalopoda). Behavioral and Neural Biology, 53, 217-30.
Finke,
R., Ward, T., & Smith, S. (1992). Creative cognition: Theory, research
and applications.
Fodor, J. (1983). The modularity of mind.
_____, (1985). Précis of ‘The Modularity
of Mind’. The Behavioral and Brain Sciences, 8, 1-42.
_____, (1998). In critical condition: Polemical essays on
cognitive science and the philosophy of mind.
_____, (2001). The mind doesn’t work that way: The scope
and limits of computational psychology.
Frank, E., & Wenner, P. (1993).
Environmental specification of neuronal connectivity. Neuron, 10, 779-85.
Franz, M. (2003). Robots with cognition?
Tübinger Wahrnehmungskonferenz, 6,
Proc. 38.
French, R., & Sougne, J. (2001).
Connectionist models of learning, development & evolution.
Gardner,
H. (1993). Multiple intelligences: The theory in practice.
Gerkey, B.,
& Matari, M. (2002). Pusher-watcher: An
approach to fault-tolerant tightly-coupled robot coordination. Proceedings of the IEEE International Conference on Robotics and
Automation ICRA, 2002, 464-9.
Goodale, M.,
& Murphy, K. (2000). Space in the brain: Different neural substrates
for allocentric and egocentric frames of reference. In T. Metzinger
(Ed.), Neural correlates of consciousness (pp. 189-202).
Gray,
C. (1999). The temporal correlation hypothesis of visual feature integration:
Still alive and well. Neuron, 24, 31-47.
Hadley,
R. (1994). Systematicity in connectionist language learning. Mind
and Language, 9, 247-71.
Hale,
M. (1989). Mechanisms of the mind: An evolutionary perspective.
Hall,
G. (1994). Pavlovian conditioning: Laws of
association. In N. Mackintosh (Ed.), Animal learning and cognition (pp.
15-43).
_____, (1996). Learning about associatively activated
stimulus representations: Implications for acquired equivalence and perceptual
learning. Animal Learning and Behavior, 24,
233-55.
Heyes, C., &
Dawson, G. (1990). A demonstration of observational
learning in rats using a bidirectional control. Quarterly Journal of
Experimental Psychology, 42B, 59-71.
Heyes, C., Jaldon, E., & Dawson, G. (1992). Imitation in
rats: Initial responding and transfer evidence from a bidirectional control
procedure. Quarterly Journal of Experimental Psychology,
45B, 229-40.
Hinton,
G., & Nowlan, S. (1987). How learning can guide
evolution. Complex Systems, 1, 495-502.
Hirschfeld, L., & Gelman, S. (1994). Toward a topography
of mind: An introduction to domain specificity. In L. Hirschfeld
& S. Gelman (Eds.), Mapping
the mind: Domain specificity in cognition and culture (pp. 3-35).
Holloway,
M. (2003). The mutable brain. Scientific
American, 289, 78-85.
Holyoak, K.,
& Thagard, P. (1989). Analogical
mapping by constraint satisfaction. Cognitive Science, 13,
295-355.
_____, (1995). Mental leaps: Analogy in creative thought.
Humphrey,
N. (1992). A history of the mind: Evolution of the birth of consciousness.
Hunter, W. (1913). The delayed reaction in animals and children. Behavior
Monographs, 2, 1-86.
Jackendoff, R. (1987). Consciousness
and the computational mind.
James, W. (1890). The principles of
psychology.
Johnson-Laird, P. (1988). The computer
and the mind: An introduction to cognitive science.
Kandel, E., Schwartz, J. & Jessell, T.
(Eds.), (2000). Principles of neural science.
Kanizsa, G. (1976). Subjective
contours. Scientific American, 234, 48.
_____, (1979). Organization
in vision: Essays on gestalt perception.
Kimura,
D. (1989). Left hemisphere control of oral and brachial
movements and their relation to communication. Philosophical
Transactions of the Royal Society, Series B, B292, 135-49.
Koestler,
A. (1964). The act of creation.
Kosslyn, S.,
& Koenig, O. (1995). Wet mind: The new cognitive neuroscience.
Koza, J. (1990). Genetic Programming: A paradigm for
genetically breeding populations of computer programs to solve problems. Technical Report STAN-CS-90-1314, Stanford University.
Lek, S., & Guegan, J. (2000). Artificial neuronal networks:
Application to ecology & evolution.
Lerman, G., & Rudolph, L.
(1994). Parallel evolution of parallel processors.
Lycan, W.
(1995). Consciousness.
Mackintosh,
N. (1983). Conditioning and associative learning.
_____,
(1995). Categorization by people and by pigeons. Quarterly
Journal of Experimental Psychology, 48B, 193-214.
Macphail, E. (1996). Cognitive
function in mammals: The evolutionary perspective. Cognitive
Brain Research, 3, 279-90.
_____,
(1998). The evolution of consciousness.
Mahner, M., & Bunge, M. (2001). Function and functionalism: A synthetic perspective. Philosophy
of Science, 68, 75-94.
Maier, N. (1932). A study of orientation in the rat. Journal
of Comparative Psychology, 14, 387-99.
Mandler, G. (1995). Origins and consequences of novelty. In S. Smith et al.
(Eds.), The creative cognition approach (pp.
9-26).
Marr, D. (1983). Vision.
Mattson,
B., Sorensen, J., Zimmer, J., & Johansson, B. (1997). Neural grafting to
experimental neocortical infarcts improves behavioral outcome and reduces
thalamic atrophy in rats housed in enriched but not in standard environments. Stroke,
28, 1225-31.
Mayer, R. (1995). The search
for insight: Grappling with Gestalt psychology’s unanswered questions. In R.
Sternberg & J. Davidson (Eds.), The
nature of insight (pp. 3-32).
McFarland,
D., & Bosser, T. (1993). Intelligent
behavior in animals and robots.
Menczer, F., & Belew, R. (1994). Evolving sensors in environments of
controlled complexity. In R. Brooks (Ed.), Proceedings
of the Fourth Conference on Artificial Life (pp. 33-45).
Miles, H., Mitchell, R.,
& Harper, S. (1996). Simon says: The development of imitation in an enculturated orangutan. Journal
of Comparative Psychology, 105, 145-60.
Mitchell, R. (1987). A comparative developmental approach to understanding imitation.
Perspectives in Ethology, 7, 183-215.
_____,
(1993).
Mental models of mirror self-recognition: Two theories. New Ideas in
Psychology, 11, 295-325.
Mithen, S. (1996). The prehistory of the
mind: The cognitive origins of art, religion and science.
_____,
(1999). Handaxes and ice age carvings: Hard evidence for the
evolution of consciousness. In S. Hameroff, A.
Kaszniak, & D. Chalmers (Eds.), Toward a
science of consciousness: The third
Moore, B. (1992). Avian
movement imitation and a new form of mimicry: Tracing the evolution of a
complex form of learning. Behavior, 122,
231-63.
Moravec, H. (1999a). Robot: Mere
machine to transcendent mind.
_____, (1999b). Rise of the robots. Scientific American, 122, 124-35.
_____,
(2000).
Robots, re-evolving mind (pp. 1-9). On-line article:
www.frc.ri.cmu.edu/~hpm/project.archive/robot.papers/2000/Cerebrum
Nolfi, S. (1990). Learning
and evolution in neural networks. CRL Technical
Report 9019,
Olton, D.
& Samuelson, R. (1976). Remembrance of places passed: Spatial memory in rats. Journal
of Experimental Psychology: Animal Behavior Processes, 2, 97-116.
Palmer,
J., & Palmer, A. (2002). Evolutionary psychology: The ultimate origins of human
behavior.
Parker, S.
(1996).
Apprenticeship in tool-mediated extractive foraging: The origins of imitation,
teaching and self-awareness in great apes. Journal of Comparative
Psychology, 106, 18-34.
Patterson,
K., & Wilson, B. (1987). A ROSE is a ROSE or a NOSE: A deficit in initial letter
identification. Cognitive Neuropsychology,
7, 447-77.
Pearce, J. (1997). Animal learning and cognition.
Peterhans, E., & von der Heydt, R. (1991). Subjective contours –
bridging the gap between psychophysics and physiology. Trends
in Neuroscience, 14, 12.
Pinker, S.
(1994). The language instinct.
_____,
(1997). How
the mind works.
_____, (2002). The blank slate: The modern denial of
human nature.
Pinker, S., & Prince, A. (1988). On language and
connectionism: Analysis of a parallel distributed processing model of language
acquisition. In S. Pinker & J. Mehler
(Eds.), Connections and symbols (pp. 73-193).
Plotkin, H. (1997). Evolution in mind: An introduction
to evolutionary psychology.
Putnam,
H. (1960). Minds and machines. In S.
Hook (Ed.), Dimensions of mind (pp. 27-37).
Pylyshyn, Z. (1980). Computation and cognition: Issues in
the foundation of cognitive science. Behavioral and Brain
Sciences, 3/1, 111-34.
Receveur, H.,
& Vossen, J. (1998). Changing
rearing environments and problem solving flexibility in rats. Behavioral
Processes, 43, 193-210.
Reiner, A., Perkel, D., Mello, C., & Jarvis, E. (2004). Songbirds and the revised avian brain nomenclature. Annals
of the
Rescorla, R. (1988). Behavioral studies of Pavlovian conditioning. Annual Review
of Neuroscience, 11, 329-52.
Roberts, A., Robbins, T., & Weiskrantz,
L. (1998). The prefrontal cortex: Executive and cognitive functions.
Roberts,
R. (1989). Serendipity: Accidental discoveries in science.
Roberts, W., & Grant, D. (1974). Short-term memory in
the pigeon with presentation time precisely controlled. Learning and
Motivation, 5, 393-408.
Rumelhart, D., & McClelland, J.
(1985). PDP models and general issues in cognitive science. In D. Rumelhart & J. McClelland (Eds.), Parallel
distributed processing: Explorations in the microstructure of cognition, volume
1: foundations (pp. 110-46).
Ruse, M. (1973). The philosophy of biology.
Russon, A.,
& Galdikas, B. (1993). Imitation
in free-ranging rehabilitant orangutans. Journal of
Comparative Psychology, 107, 147-61.
_____, (1995). Constraints on great apes’ imitation: Model
and action selectivity in rehabilitant orangutan (Pongo pygmaeus) imitation. Journal of Comparative
Psychology, 109, 5-17.
Savage-Rumbaugh, E., & Boysen, S.
(1978). Linguistically-mediated tool use and exchange by chimpanzees (Pan troglodytes). Brain and Behavioral
Science, 4, 539-53.
Sekuler, R., & Blake, R.
(2002). Perception.
Scher, S., &
Rauscher, F. (2003). Nature read in truth or
flaw: Locating alternatives in evolutionary psychology. In S. Scher
& F. Rauscher (Eds.), Evolutionary psychology:
Alternative approaches (pp. 1-30).
Schwartz,
J., & Begley, S. (2002). The mind and brain: Neuroplasticity
and the power of mental force.
Searle,
J. (1990). Is the brain a digital computer? Proceedings and Addresses of the
American Philosophical Association, 64, 21-37.
_____,
(1992). The rediscovery of the mind.
Sereno, M., et
al., (1995). Borders of multiple visual areas in
humans revealed by functional magnetic resonance imaging. Science, 268, 889-93.
Shallice, T.
(1988). From neuropsychology
to mental structure.
_____, (1997). Modularity and
consciousness. In
N. Block et al. (Eds.), The Nature of consciousness (pp. 277-97).
Shettleworth, S.
(2000). Modularity and the evolution of cognition.
In
C. Heyes & L. Huber (Eds.), The
Evolution of cognition (pp. 43-60).
Simon,
H. (1975). The functional equivalence of problem-solving
skills. Cognitive Psychology, 7, 268-88.
Smolensky, P. (1988). On the
proper treatment of connectionism. Behavioral and Brain Sciences,
11, 1-23.
Sober, E.
(1990). Putting the function back into functionalism. In W. Lycan (Ed.), Mind and cognition
(pp. 97-106).
Sperber, D. (1994). The modularity of
thought and the epidemiology of representations. In L. Hirschfeld
& S. Gelman (Eds.), Mapping
the mind: Domain specificity in cognition and culture (pp. 39-67).
Sternberg, R. (1996). Cognitive psychology.
_____,
(2000). Practical intelligence in everyday life.
_____,
(2001). Complex
cognition: The psychology of human thought.
Stringer,
C., & Andrews, P. (2005). The complete world of human
evolution.
Terzis, G. (2001). How crosstalk creates vision-related eureka moments. Philosophical
Psychology, 14, 393-421.
Thagard, P., Holyoak,
K., Nelson, G., & Gochfeld, D. (1990). Analog
retrieval by constraint satisfaction. Artificial Intelligence, 46,
259-310.
Thearling, K., & Ray, T. (1997). Evolving
parallel computation. Complex Systems, 10, 1-8.
Tomasello, M. (1990). Cultural
transmission in the tool use and communicatory signaling of chimpanzees.
In S. Parker & K Gibson (Eds.), “Language” and
intelligence in monkeys and apes (pp. 274-311).
Tomasello, M. et al. (1987). Observational
learning of tool use by young chimpanzees. Human
Evolution, 2, 175-85.
Tomasello, M. et al. (1993). Imitative
learning of actions on objects by children, chimpanzees, and enculturated chimpanzees. Child Development, 64,
1688-1705.
Turing, A. (1950). Computing machinery and intelligence. Mind,
59, 433-60.
van Praag,
H., Kempermann, G., & Gage, F. (2000). Neural consequences of environmental enrichment. Nature Reviews Neuroscience, 1, 191-8.
von Neumann, J. (1958). The computer and the brain.
Weir, A.,
Chappell, J., & Kacelnik, A. (2002). Shaping
of hooks in New Caledonian crows. Science, 297,
981.
Weisberg, R. (1995). Case
studies of creative thinking: Reproduction versus restructuring in the real
world. In S. Smith et al. (Eds.), The
creative cognition approach (pp. 53-72).
Wertheimer,
M. (1912/1961). Experimental studies on the seeing of motion. In T. Shipley (Ed.), Classics
in psychology (pp. 1032-88).
_____,
(1923/1958). Principles of perceptual organization. In D. Beardslee & M. Wertheimer
(Eds.),
Wheeler,
M., & Atkinson, A. (2001). Domains, brains and evolution.
In D. Walsh (Ed.), Naturalism, evolution and mind (pp.
239-66).
Whiten, A., & Custance, D. (1996). Studies of imitation
in chimpanzees and children. In C. Heyes
& B. Galef (Eds.), Social learning in animals:
The roots of culture (pp. 291-318).
Whiten, A. et al. (1996).
Imitative learning of artificial fruit processing in children (Homo sapiens)
and chimpanzees (Pan troglodytes). Journal of Comparative Psychology,
110, 3-14.
Whiten, A. et al. (1999). Cultures
in chimpanzees. Nature, 399, 682-5.
Wiener, N. (1948). Cybernetics, or control and communication in the animal and the
machine.
Williams, R. (1988). Toward a theory of reinforcement-learning connectionist systems.
Technical Report NU-CCS-88-3,
Northeastern University.
Wright, A. (1989). Memory processing by pigeons, monkeys, and people. Psychology
of Learning and Motivation, 24, 25-70.
_____,
(1997).
Memory of auditory lists by rhesus monkeys (Macaca
mulatta). Journal of Experimental Psychology:
Animal Behavior Processes, 23, 441-9.
Yao, X., & Liu, Y. (1998). Towards designing artificial neural networks by evolution. Applied Mathematics and Computation, 91, 83-90.
Yando, R.,
Seitz, V., & Zigler, E. (1978). Imitation:
A developmental perspective.
Zentall, T. et
al. (1990). Memory strategies in pigeons’ performance
of a radial-arm-maze analog task. Journal of Experimental Psychology:
Animal Behavior Processes, 16, 358-71.
Zornetzer, S.,
Davis, J., & Lau, C. (Eds.). (1990). An
introduction to neural and electronic networks.