Can High
Technology and Libertarian Politics
Lead us to a
Transhuman Golden Age?
Ben Goertzel
September 2000
Nietzsche gave his book “Twilight of the Idols” the subtitle “How to philosophize with a hammer.” It was the moral codes and habitual thought patterns of his culture that he was smashing. In a similar vein, the creed of the Extropians, a group of transhumanist futurists centered in California, might be labeled “How to technologize with a hammer.” This group of computer geeks and general high-tech freaks wants to push ahead with every kind of technology as fast as possible – the Internet, body modification, human-computer synthesis, nanotechnology, genetic modification, cryogenics, you name it. Along the way they want to get rid of governments, moral strictures, and eventually humanity itself, remaking the world as a hypereconomic virtual reality system in which money and technology control everything. Their utopian vision is sketchy but fabulous: a kind of Neuromancer-ish, Social-Darwinist Silicon-Valley-to-the-n’th-degree of the collective soul.
Intuitively conceived as the opposite of entropy, Extropy is a philosophical rather than a scientific term. The Extropians website (www.extropy.org), the online Bible of the movement, defines Extropy as “A metaphor referring to attitudes and values shared by those who want to overcome human limits through technology. These values … include a desire to direct oneself in pursuing perpetual progress and self-transformation with an attitude of practical optimism implemented using rational thinking and intelligent technology in an open society.” Extropianism is a form of transhumanism, concerned with the quest for “the continuation and acceleration of the evolution of intelligent life beyond its currently human form and limits by means of science and technology, guided by life-promoting principles and values, while avoiding religion and dogma.” Working toward the obsolescence of the human race through AI and robots is one part of this; another aspect is the transfer of human personalities into “more durable, modifiable, and faster, and more powerful bodies and thinking hardware,” using technologies such as genetic engineering, neural-computer integration and nanotechnology.
Along with this technological vision comes a political vision. Extropians, according to extropy.org, are distinguished by a host of sociopolitical principles, such as: “Supporting social orders that foster freedom of speech, freedom of action, and experimentation. Opposing authoritarian social control and favoring the rule of law and decentralization of power. Preferring bargaining over battling, and exchange over compulsion. Openness to improvement rather than a static utopia. … Seeking independent thinking, individual freedom, personal responsibility, self-direction, self-esteem, and respect for others.” It is explicitly stated in Extropian doctrine that there cannot be socialist Extropians, although the various shades of democratic socialism are not explored in detail. In point of fact, the vast majority of Extropians are radical libertarians, advocating the total or near-total abolition of the government. This is really what is unique about the Extropian movement: the fusion of radical technological optimism with libertarian political philosophy. With only slight loss of meaning, one might call it libertarian transhumanism.
Some Extropians carry their anti-socialism to a remarkable extreme. For instance, visionary roboticist Hans Moravec, an Extropian stalwart and hero, had a somewhat disturbing exchange with writer Mark Dery in 1993. Dery asked Moravec about the socioeconomic implications of the robotic technology he envisioned. Moravec replied that “the socioeconomic implications are … largely irrelevant. It doesn’t matter what people do, because they’re going to be left behind like the second stage of a rocket. Unhappy lives, horrible deaths, and failed projects have been part of the history of life on Earth ever since there was life; what really matters in the long run is what’s left over.” Does it matter to us now, he asks, that dinosaurs are extinct? Similarly, the fate of humans will be uninteresting to the superintelligent robots of the future. Humans will be viewed as a failed experiment – and we can already see that some humans, and some human cultures, are worse failures than others.
Dery couldn’t quite swallow this. “I wouldn’t create a homology between failed reptilian strains and those on the lowermost rungs of the socioeconomic ladder.”
Moravec’s reply: “But I would.”
Put this way, Extropianism starts to seem like a dangerous and weird fringe philosophy. But Moravec is not the only well-known name behind the movement. Others include Marvin Minsky the AI guru, Eric Drexler the nanotechnologist, Kevin Kelly of Wired Magazine, and futurist writer Ray Kurzweil. Extropy magazine had several thousand subscribers before it moved onto the Web in 1997; and the Extropy e-mail discussion list is a hugely active one. A vast amount of online literature related to various aspects of Extropian thinking is linked from the extropy.org site. This is definitely one of the more active and vibrant online communities. Whatever its strengths and weaknesses, it’s worth paying some attention to.
The man who got this all started was Max More, a philosophy PhD with a knack for rational argumentation and a remarkably convincing personal style. In 1995, Jim McClellan interviewed More for the UK newspaper The Observer and noted, "The funny thing about Max is that while his ideas are wild, he argues them so calmly and rationally you find yourself being drawn in."
More started his career studying philosophy, politics and economics at St. Anne’s College, Oxford, in the mid-1980s. At that point his main focus was on economics, from the libertarian perspective. While doing his degree, Max became strongly interested in life extension, and he was the first person in Europe to sign up for cryonic suspension with the US firm Alcor. In 1995, when he received his philosophy degree from the University of Southern California for research on mind, ethics and personal identity, he was already deep into organizing the Extropian movement, bringing his political and technological interests together. Technology, he felt, was ready to push mind into new spaces altogether, such as virtual realities where the notion of “I” as currently conceived had no meaning. Governments were holding us back, preventing or slowing research in crucial areas.
The first edition of Extropy magazine came out in August/September 1988 with just 50 copies, co-edited by Max More and his friend T.O. Morrow. It was a wild mix of sci-fi-meets-reality thinking, ranging from life extension, machine intelligence, and space exploration to intelligence augmentation, uploading, enhanced reality, alternative social systems, futurist philosophy, and so on. The magazine seeded the social network that led to the e-mail list (1991), the first Extropy conference (1994) and the website (1996), which soon (1997) absorbed and superseded the paper magazine.
In terms of philosophical precedents, it’s not too inaccurate to call More’s credo a mix of Ayn Rand-ian anti-statist individualism with Nietzschean transmoralism, held together by a focus on future technologies. In Extropy #10 he explicitly equates the “optimal Extropian” with “Nietzsche’s Ubermensch.” But he cautions, in another essay (“Technological Transformation: Expanding Personal Extropy”), that “the Ubermensch is not the blond beast and plunderer.” Rather, the Extropian Ubermensch “will exude benevolence, emanating its excess of health and self-confidence.” That’s reassuring… yet hard to reconcile with Moravec’s Olympian detachment regarding the destruction of the human race. This contradiction, I believe, is both Extropianism’s core weakness and a primary source of its energy.
In spite of More’s forceful argumentative style, Extropianism is certainly not an orthodoxy. Within the general “party line” of Extropianism, there’s room for a lot of variety. This is one of the movement’s strengths, and surely a necessary aspect of any organization involving so many over-clever, individualistic oddball revolutionaries. Moravec and More don’t agree with each other entirely, and don’t necessarily agree with all their own past opinions. Consensus isn’t critical; progress is the thing.
Moravec is robotics-focused; More is particularly into life extension; on the other hand, Eliezer Yudkowsky, one of the more interesting young Extropians, focuses his thinking on the coming “Singularity,” the point at which artificial intelligence first exceeds human intelligence. Rather than focusing on life extension, nanotechnology, virtual reality, robotics or genetic modification, Yudkowsky reckons the best course to the delirious cyberfuture is to create a computer smarter than us, one that can figure out these other puzzles for us. To accomplish this goal of real computer intelligence, he proposes the notion of “seed AI,” in which one first writes a simple AI program that has a moderate level of intelligence, and the ability to modify its own computer code, to make itself smarter and smarter. His design for a “seed AI” is still evolving, and is being posted on the Web as it emerges. Discussions on his “Singularitarian” e-mail list led to the formation of the Singularity Institute devoted to “seed AI,” and to the company Vastmind.com, developer of a distributed processing framework that allows a collection of computers on the Net to act like a single vast machine.
Yudkowsky, like many of the leading Extropians, started his life as a gifted child; and, like many gifted children, he grew up neglected by the school system and misunderstood by his parents. He’s followed a unique psychological trajectory: After the seventh grade, he was stricken with a peculiar lack of energy, which to some degree plagues him to this day. His parents tried to help him cope with this in various ways, but without success: only when they allowed him to take control of his own life and his own mind was he able to work his way back to a productive and functional state of mind. This experience, he says, taught him that even well-meaning, loving people who want to help you can do you a lot of damage, due to their lack of understanding. He cites this as one of the sources of his libertarian political philosophy. Just as his parents tried to guide his life but failed in spite of good intentions, so does the government try to guide the lives of its citizens, but fails – and fails particularly where the vanguard of technology is concerned.
I’ve had a few intellectual exchanges with Yudkowsky, Marvin Minsky and other Extropian thinkers, but the only Extropian I’ve known well on a personal level was Sasha Chislenko – a visionary cybertheorist and outstanding applied computer scientist. Sasha’s work, thought and life exemplify the brilliance and power and weakness and danger of the Extropian creed in an extremely vivid way.
Like that of many Russian emigrants to the US, Sasha’s libertarianism was born of years of oppression under the Soviet Socialist regime. Having seen at first hand how much trouble an authoritarian government can cause, he was convinced that government was intrinsically an almost entirely evil thing. After he left the Soviet Union, Sasha was a man without a country, lacking a Russian passport due to his illegal escape from Russia, and lacking an American passport because of his status as a political refugee. Once Sasha and I were invited to give a lecture together at Los Alamos National Labs in New Mexico, but we were informed that since he lacked a passport, he couldn’t get the security clearance required to enter the lab grounds. We ended up turning down the lecture invitation, both of us disgusted at the government’s closed-mindedness.
Sasha was impatient for body modification technology to advance beyond the experimental stage – he was truly, personally psyched to become a cyborg, to jack his brain into the Net, to replace his feeble body and brain with superior engineered components. Not that there was anything particularly wrong with his body and brain -- they just weren’t as good as the best synthetic versions he could envision. He was a strong advocate of various “smart drugs,” some legal, some not, which he felt gave him a superhuman clarity of thought. He was outraged that any government would consider it had the right to regulate the chemicals he chose to put into his body to enhance his intelligence.
His own technical work focused on “active collaborative filtering,” technology that allows people to rate and review things they see on the Net, and then recommends things to people based on their past ratings and the ratings of other similar people. Popular websites like amazon.com and bn.com have collaborative filtering systems embedded in them – when you log on to buy a book, they give you a list of books you might be interested in. Sometimes these systems work, sometimes they don’t. Recently I logged onto Amazon to buy a “Bananas in Pyjamas” movie for my young daughter, and their recommendation system suggested that I might also be interested in the movie “Texas Chainsaw Massacre II.” How it came up with that recommendation I’m not sure, though I can guess: perhaps the only previous person to buy “Bananas in Pyjamas” had also bought the Texas Chainsaw Massacre film. The recommendation systems that Sasha designed were far more sophisticated than this one, probably the most advanced in the world. He led a team implementing some of his designs at Firefly, a company later acquired by Microsoft.
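To make the mechanism concrete, here is a minimal sketch in Python of the kind of user-based collaborative filtering described above. It is illustrative only – the data, the names and the cosine-similarity weighting are my own assumptions, and Sasha’s actual designs were far more sophisticated – but it does show how a single overlapping purchase can drag an oddball title into your recommendations.

```python
from math import sqrt

# Toy ratings: user -> {item: rating on a 1-5 scale}. Purely made-up data.
ratings = {
    "alice": {"Bananas in Pyjamas": 5, "Toy Story": 4},
    "bob":   {"Bananas in Pyjamas": 5, "Texas Chainsaw Massacre II": 4},
    "carol": {"Toy Story": 5, "The Lion King": 4},
}

def similarity(a, b):
    """Cosine similarity between two users' rating vectors (unrated items count as zero)."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[i] * b[i] for i in shared)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

def recommend(user, ratings, top_n=3):
    """Score items the user hasn't rated by the similarity-weighted ratings of other users."""
    scores, weights = {}, {}
    for other, other_ratings in ratings.items():
        if other == user:
            continue
        sim = similarity(ratings[user], other_ratings)
        if sim <= 0:
            continue
        for item, rating in other_ratings.items():
            if item in ratings[user]:
                continue  # don't re-recommend what the user already rated
            scores[item] = scores.get(item, 0.0) + sim * rating
            weights[item] = weights.get(item, 0.0) + sim
    ranked = sorted(((score / weights[item], item) for item, score in scores.items()), reverse=True)
    return [item for _, item in ranked[:top_n]]

# With this data, alice's single overlap with bob is enough to pull in bob's other purchase.
print(recommend("alice", ratings))
```

In a filter this naive, one shared purchase is all it takes to recommend a slasher film alongside a children’s video; weighting many more signals, and doing it well, was precisely the territory Sasha worked in.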
Compared to body modification, cranial jacks and superhuman artificial intelligence, active collaborative filtering might seem a somewhat unexciting path to the hypertechnological future, but to Sasha, it was a tremendously thrilling thing – a way for humans to come together and enhance one another’s mental effectiveness by passing along what they’d learned in the form of ratings, reviews and recommendations. Recommendation and filtering technology was a kind of collective smart drug for the net-surfing human race.
Sasha’s vision in this area is epitomized by a website like epinions.com, which pays users to give their reviews of consumer products and other things. The higher that others rate your reviews, the more you get paid. He strongly felt that, as the economy transformed into a cyber-powered hypereconomy, intellectual contributions like his own would finally get the economic respect they’d always deserved. People would be paid for writing scientific papers to the extent that other scientists appreciated the papers. The greater good would be achieved, not by the edicts of an authoritarian government, but by the self-organizing effects of people rating each other’s productions, and paying each other for their ratings and opinions. He coined the word “hypereconomics” to refer to the complex dynamics of an economy in which artificial agents dispense small payments for everything, and in which complex financial instruments emerge even from simple everyday transactions – AI agents paying other agents for advice about where to get advice; your shopping agent buying you not just lettuce but futures and options on lettuce, and maybe even futures and options on advice from other agents.
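As a toy illustration of the payment rule described above – reviewers sharing rewards in proportion to how highly others rate their reviews – here is a small sketch; the fixed payment pool and the proportional split are my own assumptions for illustration, not epinions.com’s actual formula.

```python
# Toy payout rule: reviewers split a fixed pool in proportion to the
# average rating their reviews receive from other users. The pool size
# and the proportional split are illustrative assumptions only.

review_ratings = {
    "sasha":  [5, 5, 4],   # ratings other users gave this reviewer's reviews
    "dmitri": [3, 4],
    "eve":    [2, 2, 3],
}

def payouts(review_ratings, pool=100.0):
    averages = {name: sum(rs) / len(rs) for name, rs in review_ratings.items()}
    total = sum(averages.values())
    return {name: pool * avg / total for name, avg in averages.items()}

for reviewer, pay in payouts(review_ratings).items():
    print(f"{reviewer}: ${pay:.2f}")
```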
But there was a painful contradiction lurking here, not far beneath the surface. And this personal contradiction, I believe, cuts right at the heart of Extropian philosophy. The libertarian strain in Sasha’s thinking was highly pronounced: Once he told me, tongue only halfway in cheek, that he thought air should be metered out for a price, and that those who didn’t have the money to pay for their air should be left to suffocate! I later learned this was a variation on a standard libertarian argument, often repeated by Max More, to the effect that the reason the air was polluted was that nobody owned it – ergo, air, like everything else, should be private property.
Sasha equated wealth with fundamental value, and his vision of the cyberfuture was one of a complex hypereconomic network, a large mass of money buzzing around in small bits, inducing people and AI agents to interact in complex ways according to their various personal greed motives. But he was by no means personally wealthy, and this fact was highly disturbing to him. He often felt that he was being shafted, that the world owed him more financial compensation for his brilliant ideas, that the companies he’d worked for had taken his ideas and made millions of dollars from them, of which he’d seen only a small percentage in the form of salary and stock options.
When Sasha committed suicide in mid-2000, I wondered whether it had been an act of philosophical despair. Had there been a problem at the company where he was CTO – were they unwilling to implement his latest designs for online collaborative filtering? Had he received one more devastating piece of evidence that the world just wasn’t going to compensate him appropriately for his ideas, that the hypereconomic cyberfuture was far too slow in arriving? As it turned out, his terrible action was more directly motivated by a complicated and painful romantic relationship – good old-fashioned, low-tech human passionate distress.
In some important ways, Sasha was similar to Nietzsche, who as we’ve seen was one of the Extropians’ philosophical godfathers. Both Sasha and Nietzsche were intellectual superstars who explicitly espoused one moral philosophy, but lived another. Nietzsche preached toughness and hardness, but in his life he was a sweet person, respectful of the feelings of his mother and sister (whose beliefs he despised). On the day he went mad, he was observed hugging a horse in the street, overcome with sympathy because its master had whipped it. He preached the merits of the robust, healthy man of action and criticized intellectual ascetics, yet he himself was sickly, nearly celibate, and sat in his room thinking and writing day in and day out. Similarly, Sasha extolled the money theory of value, yet lived his own life seeking truth and beauty rather than cash, trying to transform the world for the better and distributing his ideas for free online. He argued that air should be metered out only to those who could pay for it, yet was unfailingly kind and generous in real life, always willing to help young intellectuals along their way without asking for anything in return.
For what it’s worth, it’s impossible to avoid observing in this context that Sasha, the would-be-cyborg transhumanist, manifested a remarkable number of cyborg-like personality traits. His body movements were sometimes peculiarly robotic – he looked most natural when dancing to techno music, one of his favorite hobbies. It would be an unfair exaggeration to say that his voice had something of the manner of a speech synthesizer – but it did have a peculiar stiffness that one might describe as wooden or metallic. Of course, I don’t want to make this point too strongly: Sasha was an outgoing, friendly human being, easily hurt and in some circumstances quick to anger; he was by no means devoid of affect. But when, about six months before his death, a group of us were coming up with silly e-mail nicknames for our co-workers (Sasha was among them at the time), the one we picked for Sasha was robotron@ …. It was clear to everyone who knew him that he had difficulties dealing with the ambiguities and subtleties of human attitudes and relationships. He acknowledged this himself, and sometimes said it was something he was working on. He was a poor politician, which is partly why he so often got himself into positions where his ideas weren’t adequately appreciated by his co-workers or employers. Extropianism, a clear-cut, simple philosophy, seemed to provide him a welcome respite from the human complexities and contradictions that caused him so much grief in ordinary life.
Of course, not all Extropians have Sasha’s personality characteristics. It would be a mistake to overgeneralize, to create a psychology of Extropians from this one example. Max More, for example, is extremely politically adept in his own way. Many Extropians have above average mastery of human relations, happy personal lives, and so forth. But still, it’s attractive to hypothesize that the role Extropianism played for Sasha – providing crisp certainties to serve as welcome relief from the puzzling, stressful confusion of everyday life – tells us something about the general function of the Extropian belief system.
Extropianism provides its adherents with a simple, optimistic world-view, and a community of like-minded believers. Like most religions, and other religion-like belief systems such as Marxism, it avoids the difficult ambiguities of human reality by making extreme proclamations. Of course, Extropianism is explicitly anti-religious, but it’s not a new observation that rabid anti-religiosity is almost a religion itself. As Dostoevsky said, the atheist is one step away from the devout. Atheism and theism provide the mind with the same kind of rigid certainty. For some people, this kind of definitive cutting-through of the Gordian knots of messy human reality can be indispensable, providing the comfort level prerequisite to a healthy and productive state of mind.
As Max More realized from the start, the moral-philosophy aspects of Extropianism are key. Like Nietzsche, the Extropians have recognized that morals are biologically and culturally relative, rather than absolute. Who hasn’t been struck by this at one time or another? We consider it OK to eat animals but not humans; Hindus consider it immoral to eat cows; Maori and other tribes until quite recently considered it OK to eat people. Or, consider sexual morals. Why are female sexual infidelity and promiscuity considered “worse” than similar behaviors on the part of males? This is common to all human cultures; it comes straight from the evolutionary needs of our selfish DNA.
Given this blatant arbitrariness, it’s very attractive to ignore human values altogether, and focus one’s attention on knowledge, understanding and power -- qualities which seem to have an absolute meaning that morality lacks. In this vein, Nietzsche focused on personal power achieved through mental exploration and self-discipline, whereas the Extropians focus on power achieved through technological advancement. They also share a focus on intellectual brilliance, and a dismissive, dangerous attitude toward those who don’t have what it takes to make the next step on the cosmic evolutionary path, as revealed in the Moravec quote above.
Sure, Moravec was playing Devil’s Advocate in that interview. As a counterbalance there’s More’s sometime vision of a benevolent Ubermensch. But the benevolence aspect seems oddly understated … and this understatement of supposed “trickle-down” effects is an aspect of Extropian thought that comes straight out of libertarianism. Yes, it’s a plausible claim that the absence of government is the best way to help the disadvantaged – that, if we focus all our resources on unfettered hi-tech development, the wealth will trickle down to everyone else, Margaret Thatcher and Ronald Reagan style. I personally think this is wrong, but it’s a plausible argument, both in the contemporary scene and in the cyberfuture prophesied by the Extropians. But the lack of attention paid to this supposed “trickling-down” phenomenon in Extropian and general libertarian literature makes me wonder how seriously anyone takes this aspect of these philosophies. Opinions like the one expressed by Moravec above make me wonder even more.
Ultimately, something in me rebels against the extremity of Extropian-style political and ethical philosophy. Perhaps it’s just my biological heritage, but I can’t shake the idea that there’s a core of ethical truth going beyond the cultural and biological relativity of moral codes. I posed this question once on the Extropians e-mail list, four or five years ago. I posited that compassion, simple compassion, was an ethical universal, although it might manifest itself in different ways in different cultures and different species. I suggested that compassion, in which one mind extends beyond itself to feel the feelings of others and act for the good of others without requiring anything in return, was essential to the evolution of the complex self-organizing systems we call cultures and societies. Basically, I expressed my disbelief that all human interaction is, or should be, economic in nature.
The deep intellectual and ethical discussion that I was awaiting – well, no such luck. There was a bit of flaming, some impassioned Ayn Rand-ish refutations, and then they went back to whatever else they’d been talking about, unfazed by my heretical position that perhaps transhumanism and humanism could be compatible, that technological optimism wasn’t logically and irrefutably married to libertarian politics. At that time, you could only belong to their e-mail list for free for 30 days; after that you had to pay an annual subscription fee. After my 30 days expired, I chose not to pay the fee, bemused that this was the only e-mail list I knew of that charged members money, but impressed by their philosophical consistency in this matter.
My final impression of the Extropians? I admire their courage in going against conventional ways of thinking, in recognizing that the human race is not the end-all of cosmic evolution, and in foreseeing that many of the moral and legal restrictions of contemporary society are going to be mutated, lifted or transcended as technology and culture grow. Like them, I’m outraged and irritated when governments stop us from experimenting with our minds and bodies using new technologies -- chemical or electronic or whatever. I find their writings vastly more fascinating than most things I read. They’re looking far toward the future, exploring regions of concept-space that would otherwise remain unknown, and in doing so they may well end up pushing the development of technology and society for the better. And yet, I’m a bit vexed by their vision of themselves as supertechnological proto-Ubermensches, presiding over the inevitable obsolescence of humanity. It’s simultaneously attractive, amusing and disturbing.
Nietzsche, like Sasha, was generally an exemplary human being in spite of the inhuman aspects of his philosophy. Yet many years after his death, Nietzsche’s work played a role in atrocities, just as he’d bitterly yet resignedly foreseen. In the back of my mind is a vision of a far-future hyper-technological Holocaust, in which cyborg despots dispense air at fifty dollars per cubic meter, citing turn-of-the-millennium Extropian writings to the effect that humans are going to go obsolete anyway, so it doesn’t make much difference whether we kill them off now or not. And so, I think Extropians should be read, because they’ve thought about some aspects of our future more thoroughly than just about anyone else. But I also think that the key idea that makes their group unique -- the alliance of transhuman technology with simplistic, uncompassionate libertarian philosophy – must be opposed with great vigor.
No philosophy can do justice to the full richness of human experience – philosophies are abstractions, and the role of abstractions is not to replace the specifics from which they emerge, but rather to guide the development of these specifics. But some philosophies capture more human richness than others, and it seems to me that Extropianism ranks fairly low on this scale. Extropian philosophy is a sufficiently inaccurate abstraction of the course of human and technological progress that it can’t be trusted as a guide for the evolution of the cyberworld. And in this light, the “fringe” nature of the Extropian group seems quite fortunate. There seems to be very little risk of the ideas of this cabal of Californian would-be supermen dominating our future. All signs indicate that they will continue to contribute to our collective conversation on techno-cultural evolution, but are very unlikely to acquire the power to impose their ideas on the rest of us. And this is a very good thing.
Many of the freedoms the Extropians seek – the legal freedom to make and take smart drugs, to modify the body and the genome with advanced technology – will probably come soon (though not soon enough for me, or them). But I hope that these freedoms will not come along with a cavalier disregard for those living in less fortunate economic conditions, who may not be able to afford the latest in phosphorescent terabit cranial jacks or quantum-computing-powered virtual reality doodaddles, or even an adequately nutritious diet for their children. I believe that we humans, for all our greed and weakness, have a compassionate core, and I hope and expect that this aspect of our humanity will carry over into the digital age – even into the transhuman age, outliving the human body in its present form. I love the human warmth and teeming mental diversity of important thinkers like Max More, Hans Moravec, Eliezer Yudkowsky and Sasha Chislenko, and great thinkers like Nietzsche – and I hope and expect that these qualities will outlast the more simplistic, ambiguity-fearing aspects of their philosophies. Well aware of the typically human contradictoriness that this entails, I’m looking forward to the development of a cyberphilosophy beyond Extropianism -- a humanist transhumanism.