The Internet Supermind and Beyond
Ben Goertzel & Stephan Vladimir Bugaj
July 2000
Nearly everyone who has seriously thought about the evolution of
technology over the next few hundred years has come to the same conclusion: We
live at a crucial point in history – an incredibly exciting and frightening
point; a point that is stimulating to the point of excess, intellectually,
philosophically, physically and emotionally.
A number of really big technologies are brewing. Virtual reality, which lets us create synthetic
worlds equal in richness to the physical world, thus making the Buddhist maxim
“reality is illusion” a palpable technical fact. Biotechnology, allowing us to modify our bodies in various ways,
customizing our genes and jacking our brains, organs and sense organs into
computers and other devices.
Nanotechnology, allowing us to manipulate molecules directly, creating
biological, computational, micromechanical, and other kinds of systems that can
barely be imagined today. Artificial
intelligence, enabling mind, intelligence and reason to emerge out of computer
systems – thinking machines built by humans.
And advances in unified field theory in physics will in all likelihood
join the party, clarifying the physical foundation of life and mind, and giving
the nanotechnologists new tricks no one has even speculated about yet.
Even immortality is not completely out of the
question. As Eric Drexler argued in
Engines of Creation, nano-scale robots could swarm through your body repairing
your aging cells. Or, as Hans Moravec
depicted in his classic book Mind Children, brain scan technology combined with
AI could have us uploading our minds into computers once our bodies wear
down. It sounds like science fiction,
but it’s entirely scientifically plausible: these would be immense engineering
achievements, but wouldn’t violate any known scientific laws. A lot of seemingly impossible things will
soon be possible.
Bill Joy, Chief Scientist of Sun Microsystems, one of
the leading companies in the current phase of the tech revolution, recently
wrote an article in Wired painting this same kind of future, but with a
markedly dystopian bent. He believes
all this amazing technological development will happen, and he finds it
intensely scary. It’s difficult to
blame him, actually. The potential for
abuse of such technologies is obvious.
We have to hope that an ethical evolution comparable to the
technological evolution will occur at the same time, beautifully
synchronized. This is essentially what
Ray Kurzweil foresees in The Age of Spiritual Machines. Kurzweil thinks it’s all going to happen in
the next 20 years. By comparison, we
consider ourselves realists: we think it may take 50 or so, although in 20
years we’ll certainly have moved a long way from where we are now.
The Internet is part of this heady mix. It’s a low-tech virtual reality itself,
sucking more and more of our time and attention away from the physical
world. It’s both a brain and a
perceptual world for artificial intelligence systems. And it’s a tool for accelerating technical progress in every
possible direction, enabling unprecedentedly efficient communication and
information retrieval.
This evolving network of long-term possibilities is
wonderful and amazing to think about, but, on the other hand, it’s almost too
big for any one mind or small group of minds to grapple with. Imagine a bunch of pre-linguistic
proto-humans trying to comprehend what the advent of language was going to do
to them. That’s basically the kind of
situation we’re in! Nevertheless, in
spite of the difficulty intrinsic in foresight and the impossibility of
planning any kind of revolution in advance, least of all the technological and
psychocultural kind, there’s something we can do besides sitting back and watching as
history leads us on. We can focus on
particular aspects of the revolution we’re wreaking, understanding these
aspects as part of the whole, and also focusing hard on the details,
remembering that, historically, some of the subtlest and most profound general
humanistic insights have come from dealing with very particular issues.
What the authors of this article have been working on,
for the past few years, is what we believe will be the first component of the
emerging tech revolution to fall fully into place: artificial intelligence.
Of course, we realize this is a gutsy statement. The field of artificial intelligence isn’t
all that fashionable these days; and this is understandable enough. AI’s biggest claim to fame isn’t any of its
particular achievements – beating Kasparov at chess, diagnosing diseases better
than doctors, mastering integral calculus -- but rather its incredibly
persistent habit of over-promising and under-delivering. This is always fatal in the business world,
and not looked upon very favorably in academic circles either.
But, over the last decade, we’ve studied this history
carefully, along with the theoretical foundations of AI, and the practical
aspects of implementing intelligence on modern computers. And we believe that AI is finally ready to
outgrow its history of big brags and false starts. AI’s time has finally come.
In fact, we’ll venture even further out on our
limb. Within the next few years, we
believe, there will emerge the first real AI systems. The Webmind system that we’re building at Intelligenesis Corp.
will be one of them, but there may well be others. And within the next few decades this technological advance will
induce fundamental changes in the human condition, transforming the way we view
ourselves as thinking beings, the way we interact with each other through
electronic communication networks -- the way we work, the way we feel, the way
we live.
Of course, AI is only part of the huge transformation
we humans are bringing down on ourselves.
Eventually AI will cease to exist as a discipline in itself, becoming
diffused in the general matrix of technological innovation. Every new technology will be intelligent in
one way or another, and AI will inform nanotech, biotech, virtual reality,
unified physics, chemistry, refrigerators, toasters – you name it. But everything has to start somewhere. Life originally started with a few thousand
molecules huddling together, perhaps inside a water droplet, developing
primitive reproductive and metabolic abilities. Life has gone far beyond that now, growing to encompass things as
complex and fantastic as you and me.
But the simplest incarnations of life still have something to teach us,
because they share so many properties with what has grown from them. Like proto-cells in the primordial soup, the
AI programs of the next decade will be the first, primitive, early versions of
a whole new order of being. Viewing
them with enough imagination, one will be able to get at least a murky, muddled
glimpse of what’s awakening within and around us.
The ongoing acceleration of AI development is obvious
from technical advances in the Internet industry over the last few years, if
one takes the time to study it. Take
the case of search engines, for example – or in the lingo of computer science
insiders, “information retrieval” tools.
At the tail end of the technological race for intelligent information
retrieval, one has standard search engines like AltaVista and Yahoo. These have basically no intelligence
whatsoever. They rely on pure bulk of
information. Then you have moderately
intelligent search tools made by firms like LexiQuest and Autonomy, which apply
various specialized algorithms to grasp something of the meaning of texts. This is where things have stood for a long
time. But then, over the last year,
something new has arisen: a dozen or so start-up firms have come to prominence
with technology that goes beyond this, and tries to understand text in a more
thorough and flexible way, building subtle “semantic maps” describing the
meanings of documents. There are
shortcomings in all this work: Talavara, for example, has one of the best AI
search systems around, but it focuses too much on the syntactic analysis of
documents, and uses a semantic map that isn’t nearly as flexible as the
corresponding structures in the human mind.
Similarly, WorldFree’s Know-All product does a good job of answering a
variety of questions, but it uses an overly rigid “ontology” for representing
knowledge – a fixed set of categories, not a flexible one like exists in the
human mind.
None of these firms have really come to grips with the
problem of reasoning on the semantic maps that their AI systems glean from
reading text. But this is the next
step. Over the next two years, you’ll
likely see firms coming out with sophisticated reasoning engines hooked up to
their syntax understanding and semantic mapping engines. Then, a couple years after that, people
will realize that reason isn’t enough – that you need intuition as well – and
the problem of synthesizing reason and intuition in a single flexible adaptive
intelligent system will become paramount.
And so on. Our Webmind system
provides a way of leapfrogging much of this incremental development, because
it’s been designed up-front with a synergetic model of all the mind’s
functions, rather than incrementally adding functions based on competitive
business needs. However, the main point
where the evolution of AI is concerned isn’t the coolness of Webmind or any
other particular AI system – it’s that we now have practical commercial
problems, such as Web search, that are being solved by AI technologies; and
that because of this, AI technologies are improving at a feverish pace. Even the major search engine companies – the
technology dinosaurs of the Internet business -- have started to jump on board,
with AltaVista releasing an AI-supercharged smart search site last month. It isn’t all pie in the sky anymore. The PR departments of these various firms
have a way of avoiding the word “AI”, but that’s exactly what it is. Of course, it isn’t real AI yet – not truly
adaptive, flexible, self-aware intelligence -- but the path from current AI
products to a real thinking computer program isn’t all that difficult to map
out, if you have a feel for the terrain.
What has made this onslaught of AI improvements
possible? AI researchers haven’t gotten
any more brilliant … and the technical teams of these various AI Internet firms
don’t tend to have any profound new insights into digital mind. Rather, they’re mostly implementing ideas
that have been around in academia for quite a while. What’s sparked the current burst of development is, quite
simply, hardware -- the tremendous accelerations in computer hardware
that have occurred over the past two decades. There have been vague theories on how to make AI work for a long
time, but without hardware adequate for implementing the theories, there was
little incentive to make them precise and work out all the details. Now the hardware is there, and what isn’t there
will be there in a few years. Computers
with gigabytes of RAM cost only a few thousand dollars, and networking
technology lets anyone build their own distributed supercomputer. We take for granted that we need to buy a
new computer every two years because the old ones become useless so rapidly – but
think about how amazing that is! What
if the same were true of cars or refrigerators or musical instruments?
This hardware acceleration won’t bring us biotech or
nanotech or unified physics; it won’t even bring us virtual reality, which
requires much better human sensory interfaces than we have today. But it will bring us artificial
intelligence. And AI, once we have it,
will help launch the other component technologies of the emerging tech
revolution. AI will help accelerate biotech, by helping us to understand how
the DNA sequences mapped out in the Human Genome Project actually combine to
give rise to self-organizing organisms like you and me. Biotech will allow us to jack computers into
our bodies in new ways, enabling truly visceral virtual reality
experiences. Eventually AIs may
be pressed into service solving the numerous hard engineering problems required
to make nanotech actually work.
Everything will fall into place during the next 100 years or so; but we
believe that AI will happen first.
This makes thinking carefully about AI a high
priority. And the best tool we know for
thinking about the nature of AI systems -- and the series of technological and
psychocultural effects these systems are fated to unleash -- is the idea of the
Metasystem Transition. This notion has
a long history in the philosophy of many cultures, but it was most clearly
crystallized and formulated in the work of the extraordinary Russian
philosopher-scientist Valentin Turchin.
A metasystem transition, as Turchin defines it, is a point in the
history of evolution of a system where the whole comes to dominate the
parts.
According to current physics theories, there was a metasystem transition in the early universe, when the primordial miasma of disconnected particles cooled down and settled into atoms. All of a sudden the atom was the thing, and individual stray particles weren’t the key tool to use in modeling the universe. Once particles are inside atoms, the way to understand what particles are doing is to understand the structure of the atom. And then there was another transition from atoms to molecules, leading eventually to the emergence of the full Periodic Table of Elements.
There was a metasystem transition on earth around four
billion years ago, when the steaming primordial seas caused inorganic chemicals
to clump together in groups capable of reproduction and metabolism. Unicellular life emerged, and once chemicals
are embedded in life-forms, the way to understand them is not in terms of
chemistry alone, but rather, in terms of concepts like fitness, evolution, sex,
and hunger. Concepts like desire and
intention are not far off, even with paramecia: Does the paramecium desire its
food? Maybe not … but it comes a lot
closer than a rock does to desiring to roll down a hill….
And there was another metasystem transition when
multicellular life burst forth – suddenly the cell is no longer an autonomous
life form, but rather a component in a life form on a higher level. The Cambrian explosion, which followed this transition, was the most amazing flowering of new patterns and
structures ever seen on Earth – even we humans haven’t equaled it yet. 95% of the species that arose at this time
are now extinct, and paleontologists are slowly reconstructing them so we can
learn their lessons.
Note that the metasystem transition is not an
antireductionist concept, in the strict sense.
The idea isn’t that multicellular lifeforms have cosmic emergent
properties that can’t be explained from the properties of cells. Of course, if you had enough time and
superhuman patience, you could explain what happens in a human body in terms of
the component cells. The question is one
of naturalness and comprehensibility, or in other words, efficiency of
expression. Once you have a
multicellular lifeform, it’s much easier to discuss and analyze the properties
of this lifeform by reference to the emergent level than by going down to the
level of the component cells. In a
puddle full of paramecia, on the other hand, the way to explain observed
phenomena is usually by reference to the individual cells, rather than the
whole population – the population has less wholeness, fewer interesting
properties, than the individual cells.
In the domain of mind, there are also a couple of levels
of metasystem transition. The first one
is what we call the emergence of “mind modules.” This is when a huge collection of basic mind components – cells,
in a biological brain; “software objects” in a computer mind – all come
together in a unified structure to carry out some complex function. The whole is greater than the sum of
the parts: the complex functions that the system performs aren’t really
implicit in any of the particular parts of the system; rather, they come out of
the coordination of the parts into a coherent whole. The various parts of the human visual system are wonderful
examples of this. Billions of cells
firing every which way, all orchestrated together to do one particular thing:
map visual output from the retina into a primitive map of lines, shapes and
colors, to be analyzed by the rest of the brain. The best current AI systems are also examples of this. In fact, computer systems that haven’t
passed this transition we’d be reluctant to call “AI” in any serious sense.
There are some so-called AI systems that haven’t even
reached this particular transition – they’re really just collections of rules,
and each behavior in the whole system can be traced back to one particular
rule. But consider a sophisticated
natural language system like LexiQuest – which tries to answer human questions,
asked in ordinary language, based on information from databases or extracted
from texts. In a system like this, we
do have mind module emergence. When the
system parses a sentence and tries to figure out what question it represents,
it’s using hundreds of different rules for parsing, for finding out what
various parts of the sentences mean.
The rules are designed to work together, not in isolation. The control parameters of each part of the
system are tuned so as to give maximal overall performance. LexiQuest isn’t a mind, but it’s a primitive
mind module, with its own, albeit minimal, holistic emergence. The same is true of other current
high-quality systems for carrying out language processing, computer vision,
industrial robotics, and so forth. For
an example completely different from LexiQuest, look at the MIT autonomous
robots built under the direction of Rodney Brooks. These robots seem to exhibit some basic insect-level
intelligence, roaming around the room trying to satisfy their goals, displaying
behavior patterns that surprise their programmers. They’re action-reaction modules, not minds, but they have
holistic structures and dynamics all their own.
On roughly the same level as LexiQuest and Brooks’
robots, we find computational neural networks, which carry out functions like
vision or handwriting recognition or robot locomotion using hundreds to
hundreds of thousands of chunks of computer memory emulating biological
neurons. As in the brain, the
interesting behavior isn’t in any one neuron, it’s in the whole network of
neurons, the integrative system. There
are dozens of spin-offs from the neural network concepts, such as the Bayesian
networks used in products like Autonomy and the Microsoft Help system. Bayesian
networks are networks of rules capable of making decisions such as "If the
user asks about ‘spreadsheet’, activate the Excel help system". The
programmer of such a system never enters a statement where the rule "if
the word spreadsheet occurs, activate the help system" appears -- rather
this rule emerges from the dynamics of the network. However, the programmer sets up the network in a way that fairly
rigidly controls what kinds of rules can emerge. So while the system can discover new patterns of input behavior
that seem to indicate what actions should be taken, it is unable to discover new
kinds of actions which can be taken – that is, it can only discover new
instances of information, not new types of information. It’s not autonomous, not alive.
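To make the distinction concrete, here is a minimal Java sketch of the point just described: a help system that can learn new word-to-action associations from user feedback, but whose set of possible actions is fixed in advance by the programmer. It is not a Bayesian network proper, just the simplest structure that illustrates the idea; all names are our own invention, not code from Autonomy or Microsoft.

import java.util.HashMap;
import java.util.Map;

public class ToyHelpNetwork {
    // The fixed "types" of action -- the system can never invent a new one.
    enum Action { EXCEL_HELP, WORD_HELP, GENERAL_HELP }

    // Learned association strengths between words and actions.
    private final Map<String, Map<Action, Double>> weights = new HashMap<>();

    // Strengthen a word-action link when feedback says the action was right.
    public void reinforce(String word, Action action) {
        weights.computeIfAbsent(word, w -> new HashMap<>())
               .merge(action, 1.0, Double::sum);
    }

    // Pick the action with the strongest learned association to the word.
    public Action decide(String word) {
        Map<Action, Double> row = weights.get(word);
        if (row == null || row.isEmpty()) return Action.GENERAL_HELP;
        return row.entrySet().stream()
                  .max(Map.Entry.comparingByValue())
                  .get().getKey();
    }

    public static void main(String[] args) {
        ToyHelpNetwork net = new ToyHelpNetwork();
        net.reinforce("spreadsheet", Action.EXCEL_HELP);
        net.reinforce("spreadsheet", Action.EXCEL_HELP);
        // The rule "spreadsheet -> Excel help" was never typed in by a
        // programmer; it emerged from feedback. But the action EXCEL_HELP
        // itself was: the system finds new instances, never new types.
        System.out.println(net.decide("spreadsheet")); // prints EXCEL_HELP
    }
}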
Each of the modules of our Webmind AI system has
roughly the same level of sophistication as one of these bread-and-butter AI
programs. Webmind has modules that carry out reasoning, language processing,
numerical data analysis, financial prediction, learning, short-term memory, and
so forth. Webmind’s modules are all
built of the same components, Java software objects called “nodes” and “links”
and “wanderers” and “stimuli.” Each module arranges these components in a different way, achieving its own emergent behavior and realizing a metasystem transition on its
own.
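To give a flavor of what such a substrate might look like, here is a deliberately simplified Java sketch. The names “node,” “link” and “wanderer” come from Webmind’s vocabulary, but every field, method and number below is an illustrative assumption of ours, not actual Webmind source code.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

class Node {
    final String concept;                       // what this node represents
    final List<Link> links = new ArrayList<>(); // weighted relations to other nodes
    Node(String concept) { this.concept = concept; }
}

class Link {
    final Node target;
    double weight;   // strength of the relationship, adapted over time
    Link(Node target, double weight) { this.target = target; this.weight = weight; }
}

// A wanderer moves through the network, probabilistically following strong
// links -- one simple way activity could spread through a shared semantic space.
class Wanderer {
    private final Random rng = new Random();
    Node step(Node current) {
        if (current.links.isEmpty()) return current;
        double total = current.links.stream().mapToDouble(l -> l.weight).sum();
        double r = rng.nextDouble() * total;
        for (Link l : current.links) {
            r -= l.weight;
            if (r <= 0) return l.target;
        }
        return current.links.get(current.links.size() - 1).target;
    }
}

public class WandererDemo {
    public static void main(String[] args) {
        Node cat = new Node("cat"), animal = new Node("animal"), pet = new Node("pet");
        cat.links.add(new Link(animal, 0.7));
        cat.links.add(new Link(pet, 0.3));
        // A walk from "cat" usually reaches "animal" -- activation spreading
        // along the strongest semantic links.
        System.out.println(new Wanderer().step(cat).concept);
    }
}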
But mind modules aren’t real intelligence, not in the
sense that we mean it: Intelligence as the ability to carry out complex goals
in complex environments. Each mind
module only does one kind of thing, requiring inputs of a special type to be
fed to it, unable to dynamically adapt to a changing environment. Intelligence itself requires one more
metasystem transition: the coordination of a collection of mind modules into a
whole mind, each module serving the whole and fully comprehensible only in the
context of the whole. This is a domain
that AI research has basically not confronted yet – it’s not mere egotism to
assert that the Webmind system is almost unique in this regard. It takes a lot of man-hours, a lot of
thinking, and a lot of processing power to build a single mind module, let
alone to build a bunch of them – and even more to build a bunch of them in such
a way as to support an integrative system passing the next metasystem
transition. We’re just barely at the
point now, computer-hardware-wise, that we can seriously consider doing such a
thing. But even being just barely
there is a pretty exciting thing.
Webmind allows the interoperation of these
intelligent modules within the context of a shared semantic representation –
nodes, links and so forth. Through the
shared semantic representation these different intelligent components can
interact and thus evolve a dynamical state which is not possible within any one
of the modules. Like a human brain,
each specialized sub-system is capable of achieving certain complex perceptual goals (such as reading a page of text) or cognitive goals (such as inferring causal relations) which in themselves seem impressive -- but when they are integrated, truly exciting new possibilities emerge. Taken in combination, these intelligent modules embodying reasoning, learning, natural language processing and so forth undergo a metasystem transition to become a mind capable of achieving complex goals worthy of comparison to human abilities. The resulting mind cannot be described merely as a pipeline of AI process modules; rather, it has its own dynamical properties which emerge
from the interactions of these component parts, creating new and unique
patterns which were not present in any of the sub-systems. The Webmind system isn’t complete yet – the
first complete version will be launched in mid 2001 – but we’ve created various
products based on simple combinations of the modules: a text
categorization application, a market prediction tool, a search engine,
etc. Each of these systems exhibits a lower level
of metasystem transition, but each is a building-block in creating the emergent
whole of the actual Webmind.
Such a metasystem transition from modules
to mind is a truly exciting emergence.
A system such as Webmind can autonomously adapt to changes in more
complex environments than its single-module predecessors can, and can be trained
in a manner which is more like training a human than programming a
computer. This kind of a system
theoretically can be adapted to any task for which it is able to perceive
input, and while the initial Webmind system operates in a world of text and
numerical files only, integrating it with visual and auditory systems, and
perhaps a robot body, would allow it to have some facility to perform in the
physical world as well. Applications of
even the text- and data-constrained system are quite varied and exciting, such
as autonomous financial analysis, conversational information retrieval, true
knowledge extraction from text and data, etc.
While there are other systems that can find
some interesting patterns in input data, a mind can determine the presence of
previously unknown types of patterns and make judgments that are outside the
realm of previous experience. An
example of this can be seen in financial market analysis. Previously unknown market forces, such as
the Internet, can impact various financial instruments in ways which prevent
successful trading using traditional market techniques. A computer mind can detect this new pattern
of behavior, and develop a new technique based on inferring how the current
situation relates to, but also differs from, previous experience. The Webmind market predictor already does
this, to a limited extent, through the emergence of new behaviors from the
integration of only a few intelligent modules.
As more modules are integrated, the system becomes more intelligent. Currently the Webmind market predictor can
create trading strategies in terms of long, short, and hold positions on
instruments, detect changes in the market environment (using both numerical
indicators and by reading news feeds), and develop new strategies based on
these changes.
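The real predictor is of course far more elaborate, but the basic flow described above -- read numerical indicators and news-derived sentiment, detect a change in the market regime, revise the trading stance -- can be caricatured in a few lines of Java. The thresholds and names below are invented purely for illustration:

public class ToyRegimePredictor {
    enum Position { LONG, SHORT, HOLD }

    // A naive regime test: treat a large gap between recent and long-run
    // average returns, or strongly negative news sentiment, as a regime shift.
    static boolean regimeChanged(double recentMean, double longRunMean,
                                 double newsSentiment) {
        return Math.abs(recentMean - longRunMean) > 0.02 || newsSentiment < -0.5;
    }

    static Position decide(double recentMean, double longRunMean,
                           double newsSentiment) {
        if (regimeChanged(recentMean, longRunMean, newsSentiment)) {
            // The old strategy is no longer trusted: stand aside until a
            // new strategy is inferred from the changed environment.
            return Position.HOLD;
        }
        return recentMean > 0 ? Position.LONG : Position.SHORT;
    }

    public static void main(String[] args) {
        // Steady, mildly positive returns and neutral news: stay long.
        System.out.println(decide(0.004, 0.003, 0.1));  // LONG
        // Returns suddenly diverge from history: the detector fires.
        System.out.println(decide(0.030, 0.003, 0.1));  // HOLD
    }
}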
For another short-term, real-world example
of the promise of computational mind, let’s return to the area of information
retrieval. What we really want isn’t a
search engine – we want a digital assistant, with an understanding of context
and conversational give-and-take like a human assistant provides. AskJeeves tries to provide this, but
ultimately it’s just a search-engine/chat-bot hybrid. It’s amusing enough, but quite far from the
real possibilities in this area. A
mind-based conversational search tool, as will be possible using the completed
Webmind system, will be qualitatively different. When an ambiguous request is made of a mind, it does not blindly
return some information pulled out of a database; a mind asks questions to
resolve ambiguous issues, using its knowledge of your mind as well as the
subject area to figure out what questions to ask. When you ask a truly intelligent system “find me information
about Java”, it will ask back a question such as “do you want information about
the island, the coffee, or the computer programming language?” But if it knows you’re a programmer, it
should ask instead “Do you want to know about JVM’s or design patterns or
what?” Like a human, a machine that has never been told there is an island called Java might ask only about coffee and computers -- but the ability to decide to resolve the ambiguity in the first place, in a context-appropriate way, is a mark of intelligence. An
intelligent system will use its background knowledge and previous experience to
include similar information (Java, J++, JVM, etc.), omit misleading information
(JavaScript, a totally different programming language from Java), and analyze
the quality of the information.
Information retrieval segues into information creation, when a program
infers new information by combining the information available in the various documents
it reads, providing users with this newly created information as well as
reiterating what humans have written.
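A toy Java sketch of this kind of context-sensitive clarification might look as follows. The senses of “Java,” the user-profile test, and all names here are hypothetical; a real conversational system would learn such knowledge rather than have it hard-coded:

import java.util.List;

public class ToyDisambiguator {
    record Sense(String name, String phrase) {}

    static final List<Sense> JAVA_SENSES = List.of(
        new Sense("island", "the island"),
        new Sense("coffee", "the coffee"),
        new Sense("language", "the computer programming language"));

    // Use what the system knows about the user to narrow the question asked.
    static String clarify(String query, boolean userIsProgrammer) {
        if (!query.equalsIgnoreCase("java")) return "Searching for: " + query;
        if (userIsProgrammer) {
            return "Do you want to know about JVMs, design patterns, or something else?";
        }
        StringBuilder q = new StringBuilder("Do you want information about ");
        for (int i = 0; i < JAVA_SENSES.size(); i++) {
            if (i > 0) q.append(i == JAVA_SENSES.size() - 1 ? ", or " : ", ");
            q.append(JAVA_SENSES.get(i).phrase());
        }
        return q.append("?").toString();
    }

    public static void main(String[] args) {
        System.out.println(clarify("Java", false)); // asks about all three senses
        System.out.println(clarify("Java", true));  // asks a programmer's question
    }
}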
These practical applications
are important, but it’s worth remembering that the promise of digital mind goes
beyond these simple short-term considerations.
Consider, for example, the fact that digital intelligences have the
ability to acquire new perception systems during the course of their
lives. For instance, an intelligent computer system could be attached to a bubble chamber and given the ability to observe elementary particle interactions directly. Such a system could greatly benefit particle physics research, as
the system would be able to think directly about the particle world, without having
to resort to metaphorical interpretations of instrument readings as humans must
do. Similar advantages are available to
computers in terms of understanding financial and economic data, and
recognizing trends in vast bodies of text.
This metasystem transition – from mind modules to mind
– is the one that we, the authors, have spent most of our time thinking about during the last couple of years. But it’s by no
means the end of the story. When
Turchin formulated the metasystem transition concept, he was actually thinking
about something quite different – the concept of the global brain, an emergent
system formed from humans and AI systems both, joined together by the Internet
and other cutting-edge communication technologies. It’s a scary idea, and a potent one. Communication technology makes the world smaller each day – will
it eventually make it so small that the network of people has more intrinsic
integrity than any individual person?
Shadows of the sci-fi notion of a “hive mind” arise here… images of the
Borg Collective from Star Trek. But
what Turchin is hoping for is something much more benign: a social structure
that permits us our autonomy, but channels our efforts in more productive
directions, guided by the good of the whole.
Interestingly, Turchin himself is somewhat pessimistic
about the long-term consequences of all this, but not in quite the alarmist
vein of Bill Joy -- more in the spirit of a typically Russian ironic doubt in
human nature. In other words, Bill Joy
believes that high tech may lead us down the road to hell, so we should avoid
it; whereas Turchin sees human nature itself as the really dangerous thing,
leading us to possible destruction through nuclear, biological, or chemical
warfare, or some other physical projection of our intrinsic narrow-mindedness
and animal hostility. He hopes that
technological advancement will allow us to overcome some of the shortcomings of
human nature and thus work toward the survival and true mental health of our
race. Through his Principia Cybernetica
project, co-developed with Francis Heylighen (of the Free University of
Brussels) and Cliff Joslyn (of Los Alamos National Labs in the US), he’s sought
to develop a philosophical understanding to go with the coming tech revolution,
grounded on the concept of the metasystem transition. As he says, the goal is “to develop -- on the basis of the current state of
affairs in science and technology -- a complete philosophy to serve as the
verbal, conceptual part of a new consciousness.” But this isn’t exactly being done with typical American
technological optimism. Rather, as
Turchin puts it, “My optimistic
scenario is that a major calamity will happen to humanity as a result of the
militant individualism; terrible enough to make drastic changes necessary, but,
hopefully, still mild enough not to result in a total destruction. Then what we
are trying to do will have a chance to become prevalent. But possible solutions must be carefully
prepared.”
With this in mind, it’s interesting to note that over
the last couple of years, Turchin has devoted most of his time to a highly
technical but extremely important aspect of the technological revolution:
making computer programs run faster. He
now lives in New Jersey, and together with his friends Yuri Mostovoy and Andrei
Klimov, has started a company, Supercompilers LLC, based in New Jersey and
Moscow. Admittedly it’s a bit odd for
a Russian academic in his 70s to be masterminding an Internet company, but
that’s the wonderful thing about our time: the bizarre quickly becomes routine,
accelerating innovation, and opening eyes and minds. Turchin is building a “supercompiler” that will enable Java
programs to run 10 to 100 times faster than they currently do, and use less
memory as well. It’s a wonderful piece
of technology that works, in a sense, by recognizing metasystem transitions
inside software itself, and using them to improve software performance. It could only have been developed in Russia,
where hardware advances were slow and everyone was always using inefficient,
obsolete computers -- thus ingenious methods for speeding up programs were
highly worthwhile. The importance of
this kind of work for the future of AI and the Internet in general can hardly be
overestimated. Right now the first
supercompiler is probably a year from completion; and in the couple of years
following that, the supercompiler will likely be hybridized with Webmind,
yielding an intelligent computer program that continually rewrites its own code
-- as if human beings could continually optimize their own DNA in order to
improve their own functionality and the nature of their offspring. Oh brave new world that has such programs in
it! And, brave new business environment
that allows such projects to be funded in the world of commerce, thus
accelerating development far beyond what it would be if they had to proceed at
the snail’s pace of academic research.
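Supercompilation itself is deep and subtle technology, but the underlying idea -- specializing a general program to the way it is actually used -- can be shown in miniature. The following hand-worked Java example assumes nothing about Turchin’s actual implementation; it only illustrates the family of techniques to which the supercompiler belongs.

public class SpecializationDemo {
    // General code: raise x to any non-negative power n, using a loop.
    static double power(double x, int n) {
        double result = 1.0;
        for (int i = 0; i < n; i++) result *= x;
        return result;
    }

    // If analysis shows the program only ever calls power(x, 4), a
    // specializer can unroll the loop away and emit straight-line code:
    static double power4(double x) {
        double x2 = x * x;
        return x2 * x2;   // two multiplications instead of a four-step loop
    }

    public static void main(String[] args) {
        System.out.println(power(3.0, 4));  // 81.0, via the general loop
        System.out.println(power4(3.0));    // 81.0, via the specialized code
    }
}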
As we see it, the path from the Net that we have today
to the global brain that envelops humans and machines in a single overarching
superorganism involves not one but several metasystem transitions. The first one is the emergence of the global
Web mind – the transformation of the Internet into a coherent organism. Currently the best way to explain what
happens on the Net is to talk about the various parts of the Net: particular
Websites, e-mail viruses, shopping bots, and so forth. But there will come a point when this is no
longer the case, when the Net has sufficient high-level dynamics of its own
that the way to explain any one part of the Net will be by reference to the
whole. This will come about largely
through the interactions of AI systems – intelligent programs acting on behalf of various Websites, Web users,
corporations, and governments will interact with each other intensively,
forming something halfway between a society of AI’s and an emergent mind whose
lobes are various AI agents serving various goals. The traditional economy will be dead, replaced by a chaotically
dynamical hypereconomy (a term coined by the late transhumanist theorist
Alexander Chislenko) in which there are no intermediaries except for
information intermediaries: producers and consumers (individually or in large
aggregates created by automatic AI discovery of affinity groups) negotiate
directly with each other to establish prices and terms, using information
obtained from subtle AI prediction and categorization algorithms. How far off this is we can’t really tell,
but it would be cowardly not to give an estimate: we’re betting no more than 10
years.
The advent of this system will be gradual. Initially, when only a few AI systems are deployed on the Web, they will be isolated individuals, each preoccupied with its own local responsibilities.
As more agents are added to the Net, there will be more interaction
between them. Systems which specialize
will refer questions to each other. For example, a system specializing in financial analysis (one with a lot of background knowledge, and evolved and inferred thinking processes, in that domain) may refer
questions about political activities to political analyst systems, and then
combine this information with its own knowledge to synthesize information about
the effects of political events on market activity. This hypereconomic system of Internet agents will dynamically
establish the social and economic value of all information and activities
within it, through interaction amongst all the agents. As these interactions become more complex,
agent interconnections become more prevalent and more dynamic, and agents
become more interdependent, the network will become more of a true shared
semantic space: a global integrated mind-organism. Individual systems will start to perform activities which have
no parallel in the existing natural world.
One AI mind will directly transfer knowledge to another by literally
sending it a “piece of its mind”; an AI mind will be able to directly sense
activities in many geographical locations and carry on multiple
context-separated conversations simultaneously; a single global shared-memory
will emerge allowing explicit knowledge sharing in a collective
consciousness. Across the millions,
someday billions, of machines on the Internet, this global Web mind will
function as a single collective thought space, allowing individual agents to
transcend their individual limitations and share directly in a collective
consciousness, extending their capabilities far beyond their individual
means.
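Here is a tiny Java sketch of the referral pattern just described; every interface and name in it is invented for illustration, not a proposal for an actual protocol.

import java.util.Map;

interface Agent {
    String answer(String question, Map<String, Agent> directory);
}

class FinancialAgent implements Agent {
    public String answer(String question, Map<String, Agent> directory) {
        if (question.contains("election")) {
            // Out of domain: refer to a political specialist, then combine
            // its answer with local financial knowledge.
            String political = directory.get("politics").answer(question, directory);
            return "Market view, given that " + political + ": reduce exposure.";
        }
        return "Prices look stable.";
    }
}

class PoliticalAgent implements Agent {
    public String answer(String question, Map<String, Agent> directory) {
        return "the election outcome is uncertain";
    }
}

public class ReferralDemo {
    public static void main(String[] args) {
        Map<String, Agent> directory = Map.of(
            "finance", new FinancialAgent(),
            "politics", new PoliticalAgent());
        System.out.println(directory.get("finance")
            .answer("How will the election affect bonds?", directory));
    }
}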
All this is fabulous enough – collective consciousness
among AI systems; the Net as a self-organizing intelligent information
space. And yet, it’s only after this
metasystem transition – from Internet to global hypereconomic Web mind -- that
the transition envisioned by Turchin and his colleagues at Principia Cybernetica
can take place: the effective fusion of the global Web mind and the humans
interacting with it. It will be very
interesting to see where biotech-enabled virtual reality technology is at this
point. At what point will we really be
jacking our brains into the global AI matrix, as in Gibson’s novel
Neuromancer? At what point will we
supercompile and improve our own cognitive functions, or be left behind by our
constantly self-reprogramming AI compatriots?
But we don’t even need to go that far.
Putting these more science-fictional possibilities aside and focusing
solely on Internet AI technology, it’s clear that more and more of our
interactions will be mediated by the global emergent intelligent Net – every
appliance we use will be jacked into the
matrix; every word that we say potentially transmitted to anyone else on
the planet using wearable cellular telephony or something similar; every
thought that we articulate entered into an AI system that automatically
elaborates it and connects it with things other humans and AI agents have said
and thought elsewhere in the world – or things other humans and AI agents are expected
to say based on predictive technology….
The Internet Supermind is not the end of the story – it’s only the
initial phase; the seed about which will crystallize a new order of mind,
culture and technology. Is this going
to be an instrument of fascist control, or factional terrorism? It’s possible, but certainly not inevitable
– and the way to avoid this is for as many people as possible to understand
what’s happening, what’s likely to happen, and how they can participate in the
positive expansion of this technology.
Imagine: human and machine identities joined into the
collective mind, creating a complex network of individuals from which emerges
the dynamics of a global supermind, with abilities and boundaries far greater
than would be possible for any individual mind, human or artificial – or any
community consisting of humans or AI’s alone.
As Francis Heylighen has said, “Such a global brain will function as a nervous
system for the social superorganism, the integrated system formed by the whole
of human society.” Through this global
human-digital emergent mind, we will obtain a unique perspective on the world,
being able to simultaneously sense and think in many geographical locations and
potentially across many perceptual media (text, sound, images, and various
sensors on satellites, cars, bubble chambers, etc.). The cliché “let’s put
our minds together on this problem” will become a reality, allowing people and
machines to pool their respective talents directly to solve tough problems in
areas ranging from theoretical physics to social system stabilization, and to
create interesting new kinds of works in literature and the arts.
Weird? Scary? To be sure. Exciting? Amazing? To be sure, as well. Inevitable? An increasing number of techno-visionaries think so. Some, like Bill Joy, have retreated into neo-Ludditism, believing that technology is a big danger and advocating careful legal control of AI, nanotech, biotech and related things. Turchin is progressing ahead as fast as possible, building the technology needed for the next phase of the revolution, careful to keep an eye on the ethical issues as he innovates, hoping his pessimism about human nature will be proved wrong. As for us, we tend to be optimists. Life isn’t perfect, plants and animals aren’t perfect, humans aren’t perfect, computers aren’t perfect – but yet, the universe has a wonderful way of adapting to its mistakes and turning even ridiculous errors into wonderful new forms.
The dark world of tyranny and fear described in the writings
of cyberpunk authors like William Gibson and Bruce Sterling, and in films such
as The Matrix and Blade Runner, is certainly a possibility. But there’s also the possibility of less
troubling relationships between humans and their machine counterparts, such as
we see in the writings of transrealist authors like Rudy Rucker and Stanislaw
Lem, and in film characters like Star Trek’s Data and Star Wars’ R2-D2 and
C-3PO. We believe, through ethical
treatment of humans, machines, and information, that a mutually beneficial
human-machine union within a global society of mind can be achieved. The ethical and ontological issues of
identity, privacy, and selfhood are every bit as interesting and challenging as
the engineering issues of AI, and we need to avoid the tendency to set them
aside because they’re so difficult to think about. But these things are happening – right now we’re at the
beginning, not the end, of this revolution; and the potential rewards are
spectacular -- enhanced perception, greater memory, greater cognitive capacity,
and the possibility of true cooperation among all intelligent beings on
earth.
One might say, flippantly, “Hold on tight, humanity --
you’ve built this great rollercoaster, and now you’re in for one hell of a
ride!” But from an even bigger
perspective, we’ve been riding the rollercoaster of universal evolution all
along. The metasystem transitions that
gave rise to human bodies and human intelligence were important steps along the
path, but there will be other steps, improving and incorporating humanity and
ultimately going beyond it.