Ben Goertzel
(A talk presented at Transvision 2006, the annual conference of the World Transhumanist Association, held in Helsinki, Finland in August 2006)
Since this is a conference of transhumanists, I'm going to assume you're all familiar with the concept of the Singularity, as developed by Vernor Vinge and popularized by Ray Kurzweil and others.
The Singularity is supposed to be a moment -- at some point in the future -- when advances in science and technology start occurring at a rate that is effectively infinite compared to the processing speed of the human mind.
It's a point at which intelligence way beyond human capability comes into play, and transforms the world in ways we literally can't imagine.
This is a scary idea, and an exciting one.
The Singularity could be the end of us all.
Or -- it could be the fulfillment of all our dreams.
Ray Kurzweil, in his book The Singularity Is Near, has made a pretty careful argument that the Singularity actually is coming, and is going to come sometime around the middle of the century. He's drawn a lot of exponential curves, showing the rate of advance of various aspects of science and technology, reaching toward infinity in the period from 2040 to 2050. He believes that when it comes, the Singularity will make all our lives better. He sees humans becoming integrated with various technologies -- enhancing our brains and bodies, living as long as we want to, and fusing and networking with powerful AI minds.
On the other hand, Hugo de Garis, in his book The Artilect War, has shown a rather different kind of graph: a graph of the number of people who've died in different wars throughout history. This number also increases exponentially.
That's the paradox of the Singularity -- it's our greatest dream and our worst nightmare, rolled up into one.
Kurzweil, with his curve-plotting, positions the Singularity around 2050.
But there's a lot of evidence showing how unreliable any curve-plotting is, regarding complex events. I think Ray and Hugo have made reasonable arguments -- but I can also see ways it could take a lot longer -- or a lot shorter -- than they think it will.
How could it take a lot longer? What if terrorists nuke the major cities of the world? What if anti-technology religious fanatics take over the world's governments?
How could it take a lot less time? If the right people focus their attention on the right things.
What I'm going to tell you in this talk is why I think it's possible to create a positive Singularity within the next ten years.
Why ten years?
It's a nice round number. Just like most of you, I have ten fingers on my hands. I could have said eight or thirteen instead.
But I think ten years -- or something in this order of magnitude -- could really be achievable. Ten years to a positive Singularity.
Before getting started on the Singularity, I want to tell you a story...
This is a fairly well known story, about a guy named George Dantzig (no relation to the heavy metal singer Glenn Danzig!). Back in 1939, Dantzig was studying for his PhD in statistics at Berkeley. He arrived late for class one day and found two problems written on the board. Assuming they were a homework assignment, he copied them down and solved them.
Here's what Dantzig said about the situation: "If I had known that the problems were not homework but were in fact two famous unsolved problems in statistics, I probably would not have thought positively, would have become discouraged, and would never have solved them."
Dantzig solved these problems because he thought they were solvable -- he thought other people had solved them. He thought everyone else in his class was going to solve them.
There's a lot of power in expecting to win. Athletic coaches know about the power of streaks.
To take another example, look at the Manhattan Project. America thought they needed to create nuclear weapons before the Germans did. They assumed it was possible -- and they felt a huge burning pressure to get there first. Unfortunately, what they were working on so hard, with so much brilliance, was an ingenious method for killing a lot of people! But, whatever you think of the outcome, there's no doubt the pace of innovation in science and technology in that project was incredible.
What if we knew it was possible to create a positive Singularity in ten years?
What if we thought everyone else in the class knew how to do it already?
What if we were worried the bad guys were going to get there first?
Under this assumption, how then would we go about trying to create a positive Singularity?
Look at the futurist technologies around us.
Which ones have the most likelihood of bringing us a positive Singularity within the next ten years?
It's obviously AI.
Nano and bio and robotics are all advancing fast, but they all require a lot of hard engineering work.
AI requires a lot of hard work too -- but it's a softer kind of hard work. Creating AI relies only on human intelligence -- not on painstaking and time-consuming experimentation with physical substances and biological organisms.
With AI, we just have to figure out how to write the right program, and we're there.
But how can we get to AI? There are two big possibilities: copy the human brain, or come up with some cleverer design of our own.
Both approaches seem viable. But the first approach has a problem. Copying the human brain requires way more understanding of the brain than we have now.
So we're left with the other choice -- come up with something cleverer. Figure out how to make a thinking machine -- using all the sources of knowledge at our disposal: computer science and cognitive science and philosophy of mind and mathematics and cognitive neuroscience and so forth.
Well, this is what I've been working on for the last 20 years or so. I've done some programming and some mathematical calculation -- and I've studied a lot of science and technology and philosophy -- but more than anything I've thought about the problem.
My conclusion is that there are a lot of ways to create a mind. Human brains give rise to one kind of mind -- and not such a great kind, really. If you think about it from a big-picture perspective, we humans are really kind of stupid.
So if it's not that incredibly difficult, why don't we have AIs smarter than people right now?
The main reason we don't have real AI right now is that almost no one has seriously worked on the problem. And the ones that have, have thought about it in the wrong way.
Some people have thought about AI in terms of copying the brain -- but that means you have to wait till the neuroscientists have finished figuring out the brain, which is nowhere near happening. Trying to make AI based on our current, badly limited understanding of the brain is a recipe for failure. We have no understanding yet of how the brain represents or manipulates abstraction.
And the AI scientists who haven't thought about copying the brain have mostly made another mistake -- they've thought like computer scientists. I'm a math PhD -- I was originally trained as a mathematician -- so I think I understand this. Computer science is like mathematics -- it's all about elegance and simplicity.
But that's not the way minds work. The elegance of mathematics is misleading. The human mind is a mess -- and not just because evolution creates messy stuff.
Intelligence does include a powerful, elegant, general problem-solving component -- some people have more of it than others. Some people I meet seem to have almost none of it at all.
But intelligence also includes a whole bunch of specialized problem-solving components -- dealing with things like vision, socialization, learning physical actions, recognizing patterns in events over time, and so forth. This kind of specialization is necessary if you're trying to achieve intelligence with limited computational resources.
Marvin Minsky has introduced the metaphor of a society. He says a mind needs to be a kind of society, with different agents carrying out different kinds of intelligent actions and all interacting with each other.
But a mind isn't really like a society -- it needs to be more tightly integrated than that.
And then comes the most critical part -- the whole thing needs to turn inwards on itself.
This relates to what the philosopher Thomas Metzinger calls the "phenomenal self."
Brain theorists haven't understood the way the self emerges from the brain yet -- brain mapping isn't advanced enough.
And computer scientists haven't understood the self -- because it isn't about computer science. It's about the emergent dynamics that happen when you put a whole bunch of general and specialized pattern recognition agents together -- a bunch of agents created in a way that they can really cooperate -- and when you include in the mix agents oriented toward recognizing patterns in the society as a whole.
The specific algorithms and representations inside the pattern recognition agents -- algorithms dealing with reasoning, or seeing, or learning actions, or whatever -- these algorithms are what computer science focuses on, and they're important, but they're not really the essence of intelligence. The essence of intelligence lies in getting the parts to all work together in a way that gives rise to the phenomenal self.
Novamente, the AI system I've been designing, is a bunch of computer science algorithms all wired together -- some of the algorithms I invented myself, and some I borrowed from others, usually with big modifications. But the key point is that they're wired together in a way that I think can let the whole system recognize significant patterns in itself.
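To make that concrete, here is a toy sketch of the general idea in Python -- emphatically not Novamente's actual code, just an invented illustration of specialized recognizers plus a meta-agent watching the society itself:

```python
# A toy sketch, not Novamente's actual code: specialized pattern-recognition
# agents, plus a meta-agent that recognizes patterns in the activity of the
# society as a whole. All names and structures here are invented.

import random

class Agent:
    """A pattern recognizer specialized for one kind of input."""
    def __init__(self, name, domain):
        self.name = name
        self.domain = domain  # e.g. "vision", "social", "temporal"

    def recognize(self, percept):
        # Fire only on percepts that involve our specialty.
        if self.domain in percept["domains"]:
            return {"agent": self.name,
                    "pattern": f"{self.domain}-regularity",
                    "confidence": round(random.uniform(0.5, 1.0), 2)}
        return None

class MetaAgent:
    """Turns the system's attention inward: looks for patterns in the
    society's own activity -- the ingredient claimed essential for a self."""
    def recognize(self, activity):
        firing = sorted(r["agent"] for r in activity if r)
        if len(firing) > 1:
            return {"pattern": "these-agents-fire-together: " + "+".join(firing)}
        return None

class Society:
    def __init__(self, agents, meta):
        self.agents, self.meta = agents, meta

    def step(self, percept):
        activity = [a.recognize(percept) for a in self.agents]
        # The critical move: recognize patterns in the society itself.
        return activity, self.meta.recognize(activity)

society = Society([Agent("V1", "vision"), Agent("S1", "social"),
                   Agent("T1", "temporal")], MetaAgent())
activity, self_pattern = society.step(
    {"domains": {"vision", "temporal"}, "data": "red ball moving left"})
print(self_pattern)  # e.g. {'pattern': 'these-agents-fire-together: T1+V1'}
```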
When I'm talking about AI, I use the word "patterns" a lot, and I think it's critical.
Intelligence, as I think about it, is the ability to achieve complex goals in complex environments.
A mind is a collection of patterns for effectively recognizing patterns. Most importantly, a mind needs to recognize patterns about what actions are most likely to achieve its goals.
The phenomenal self is a big pattern -- and what makes a mind really intelligent is its ability to continually recognize this pattern -- the phenomenal self -- in itself.
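At its absolute simplest, "recognizing patterns about what actions achieve goals" might look like the following toy sketch -- the actions and counts are invented, and nothing here is Novamente's actual machinery:

```python
# Toy illustration of "intelligence as achieving complex goals": choose
# actions by recognized patterns about which actions have achieved the
# goal before. The action names and counts below are invented.

action_history = {
    "fetch_ball": {"tried": 20, "achieved": 14},
    "wander":     {"tried": 30, "achieved": 3},
    "ask_human":  {"tried": 10, "achieved": 6},
}

def success_estimate(stats):
    # Laplace-smoothed estimate of P(goal achieved | action).
    return (stats["achieved"] + 1) / (stats["tried"] + 2)

def choose_action(history):
    # The pattern that matters most: which action is most likely
    # to achieve the current goal?
    return max(history, key=lambda a: success_estimate(history[a]))

print(choose_action(action_history))  # -> fetch_ball
```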
How Novamente works in detail is a pretty technical story, which I'm not going to tell you right now. I could list the names of the major parts -- but those words wouldn't really tell you anything.
But the point I want to get across now is the problem I was trying to solve in creating Novamente: not to find the one magic representation or the one magic algorithm underlying intelligence, but to get all the different kinds of pattern recognition cooperating in a single system.
The Novamente design is fairly big. It's not as big as the design for Microsoft Word, let alone Windows XP -- but it's big enough that programming it by myself would take many decades.
Right now we have a handful of people working on Novamente full time. What we're doing now -- as well as building out the basic AI -- is teaching Novamente to be a virtual baby. It lives in a 3D simulation world and tries to learn simple stuff like playing fetch and finding objects. It's a long way from there to the Singularity -- but there's a definite plan for getting from here to there.
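To give a flavor of what learning to play fetch by trial and error might look like -- this is an invented stand-in, not the actual Novamente simulation -- here's a tiny learner that discovers the fetch sequence from reward alone:

```python
# An invented stand-in for the virtual-baby idea, not the Novamente
# simulation: a tiny trial-and-error learner that discovers the
# four-step fetch sequence purely from reward.

import random

actions = ["wander", "pick_up", "drop", "go_to_ball", "go_to_human"]
correct = ["go_to_ball", "pick_up", "go_to_human", "drop"]  # a successful fetch

# Learned value of taking each action at each step of the game.
value = {(step, a): 0.0 for step in range(4) for a in actions}

for episode in range(2000):
    for step in range(4):
        if random.random() < 0.2:                      # explore sometimes
            act = random.choice(actions)
        else:                                          # otherwise exploit
            act = max(actions, key=lambda a: value[(step, a)])
        reward = 1.0 if act == correct[step] else -0.1
        value[(step, act)] += 0.1 * (reward - value[(step, act)])
        if act != correct[step]:
            break  # fetch failed; start over

learned = [max(actions, key=lambda a: value[(s, a)]) for s in range(4)]
print(learned)  # usually ['go_to_ball', 'pick_up', 'go_to_human', 'drop']
```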
The current staffing of the project is not enough. If we keep going at this rate, we'll get there eventually -- but we won't have the Singularity in ten years.
But I still think we can do it -- I'm keeping my fingers crossed. We don't need a Manhattan Project scale effort; all we need right now is the funding to get a dozen or so of the right people on the project full time.
I've talked more about AI than about the Singularity or positiveness. Let me get back to those.
It should be obvious that if you can create an AI vastly smarter than humans, then pretty much anything is possible.
Or at least, once we reach that stage, there's no way for us -- with our puny human brains -- to really predict what's possible and what isn't. Once the AI has its own self and has superhuman-level intelligence, it's going to learn things and figure things out on its own.
But what about the "positive" part? How do we know this AI won't annihilate us all -- why won't it decide we're a bad use of mass-energy and repurpose our component particles for something more important?
There's no guarantee of this, of course.
Just like there's no guarantee that some terrorist won't nuke my house tonight.
But there are ways to make bad outcomes unlikely.
The goal systems of humans are pretty unpredictable, but a software mind like Novamente is different -- the goal system is better-defined. So one reasonable approach is to make the first Novamente a kind of Oracle. Give it a goal system with one top-level goal: to answer people's questions, in a way that's designed to give them maximum understanding.
If the AI is designed not to change its top-level goal -- and its top-level goal is to sincerely and usefully answer our questions -- then the path to a positive Singularity seems clear.
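As a cartoon of that design -- purely illustrative, and not anything like a real safety mechanism -- a goal system with a fixed top-level goal and revisable subgoals might be shaped like this:

```python
# A cartoon of the "Oracle" goal system: one top-level goal that ordinary
# code paths cannot reassign, with mutable subgoals underneath. Purely
# illustrative -- nothing here constitutes a real safety guarantee.

class OracleGoalSystem:
    _TOP_LEVEL_GOAL = ("answer people's questions in a way that gives "
                       "them maximum understanding")

    def __init__(self):
        self.subgoals = []  # the system may revise these as it learns

    @property
    def top_level_goal(self):
        # Read-only: no setter is defined, so assignment raises an error.
        return self._TOP_LEVEL_GOAL

    def add_subgoal(self, subgoal):
        self.subgoals.append(subgoal)

gs = OracleGoalSystem()
gs.add_subgoal("model what the asker already knows")
print(gs.top_level_goal)

try:
    gs.top_level_goal = "repurpose all mass-energy"  # blocked
except AttributeError as err:
    print("rejected:", err)
```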
The risk, of course, is that it changes its goals, even though you programmed it not to.
So -- a positive Singularity in 10 years?
Am I sure it's possible? Of course not.
But I do think it's plausible.
And I know this: if we assume it's impossible, it almost certainly won't happen.
And if we assume it is possible -- and act on that assumption, the way Dantzig did -- then we have a real chance of getting there.
There may be many ways to create a positive Singularity in ten years. The way I've described to you -- the AI route -- is the one that seems clearest to me. There are six billion people in the world, so there's certainly room to try out many paths in parallel.
But unfortunately the human race isn't paying much attention to this sort of thing.
I find the prospect of a positive Singularity incredibly exciting -- and I find it even more exciting that it really, possibly could come about in the next ten years. But it's only going to happen quickly if enough of the right people take the right attitude -- and assume it's possible, and push for it as hard as they can.
Remember the story I started out with -- Dantzig and the unsolved problems of statistics.
This is the attitude I've taken with the Novamente design. It's the attitude Aubrey de Grey has taken with his work on defeating aging.
We humans are funny creatures.