An Archaeological Dig Through The Extropian Archives
Whether you have heard of the Extropians or not, they have influenced you.
A group of a few dozen prolific posters seeded the online intellectual communities around AI, existential risk, accelerationism, effective altruism, rationalism, libertarianism, and cryptocurrency. Prominent members such as Nick Bostrom, Robin Hanson, and Eliezer Yudkowsky formulated, on the Extropian forum in the late 1990s, the central ideas they would carry through their entire careers.
The Extropians are influential but sometimes profligate writers. I read over five thousand of their mailing list posts (which still only covered 1996-97), along with hundreds of other pieces, and distilled them down to the influential and interesting highlights.
I will link to the original archived version of each post. If you want to see more context around a post, click “Messages sorted by: [subject]” at the top of the screen and you should jump to a list of all posts that reference the same subject in their title. Note that these are not always ordered correctly. Sorting by [thread] organizes posts by reply structure, but it often loses parts of the thread.
This is an annotated tour through the primary sources.
Section 1: What Is Extropianism?
Meet The Extropians
- Ed Regis, WIRED, 1994
This article is a historical gem not only for its lurid descriptions of Extropian parties and its impact in popularizing the movement, but also as an example of the culture at WIRED magazine. WIRED was not a mainstream publication at the time, but the unbridled positivity in this article’s description of such an opinionated and non-conformist group still feels strange to find outside of a blog. Ed Regis begins his article by extolling the Extropians for their practiced belief in human potential.
THERE'S BEEN NOTHING like this movement - nothing this wild and extravagant - since way back in those bygone ages when people believed in things like progress, knowledge, and - let's all shout it out, now - Growth!
The Handshake: Right hand out in front of you, fingers spread and pointing at the sky. Grasp the other person's right hand, intertwine fingers, and close. Then shoot both hands upward, straight up, all the way up, letting go at the top, whooping "Yo!" or "Hey!" or some such thing.
You won't be able to do this without smiling, without laughing out loud, in fact - just try it - but this little ceremony, this tiny two-second ritual, pretty much sums up the general Extropian approach. This is a philosophy of boundless expansion, of upward- and outwardness, of fantastic superabundance.
It's a doctrine of self-transformation, of extremely advanced technology, and of dedicated, immovable optimism. Most of all, it's a philosophy of freedom from limitations of any kind. There hasn't been anything like it - nothing this wild and extravagant, no such overweening confidence in the human prospect - since way back to those bygone ages when people still believed in things like progress, knowledge, and - let's all shout it out, now - Growth!
He goes on to describe the techno-libertarian-SoCal party scene.
Romana Machado - aka "Mistress Romana" - software engineer, author, and hot-blooded capitalist, showed up dressed as the State, in a black vinyl bustier and mini, with a chain harness top, custom-made for her at Leather Masters in San Jose, California, for whom she does modeling work. She was in all that garb, carrying a light riding crop, plus a leash, at the other end of which, finally, her Extropian companion Geoff Dale, the Taxpayer, crawled along in mock subjection. The couple embodied Extropian symbolism, the State being regarded as one of the major restrictive forces in the Milky Way galaxy. These people hate government, particularly "entropic deathworkers like the Clinton administration."
This “hot-blooded capitalism” mixes with cryo-preservation and brain uploading.
Mike Perry, overseer of the 27 frozen people (actually, 17 are frozen heads, only 10 are entire bodies) submerged in liquid nitrogen at minus 321 degrees Fahrenheit (Cold enough for you?) at the Alcor Life Extension Foundation, a cryonics outfit in Scottsdale, Arizona, gave a talk saying that, contrary to appearances, genuine immortality was physically possible.
…
The way to get around [death] in the future, said Perry, would be to download the entire contents of your mind into a computer - your memories, knowledge, your whole personality (which is, after all, just information) - you'd transfer all of it to a computer, make backup copies, and stockpile those copies all over creation.
In the space of a few paragraphs we are introduced to landmark cultural and scientific contributions to accelerationism, sea-steading, cryptography, and Bay Area co-living. Then we meet illustrious early members of the Extropian community: Eric Drexler and Marvin Minsky.
Along the way there was an attempt to create a nomenclature that lived up to Extropian doctrine ... There was the instantly-memorable disasturbation (another Dave Krieger invention), "idly fantasizing about possible catastrophes (ecological collapse, full-blown totalitarianism) without considering their likelihood or considering their possible solutions/preventions."
Further along there was a concerted attempt to flesh out the Extropian dream. Tom Morrow, the Extropian legal theorist ... wrote articles about "Free Oceana," a proposed community of Extropians living on artificial islands floating around on the high seas.
Operated by Romana Machado, the aforementioned "Mistress Romana" who in real life works in the Newton division of Apple Computer (she's also the inventor of Stego, a program that complements traditional encryption schemes - see "Security Through Obscurity," Wired 2.03, page 29), Nextropia is an Extropian boarding house, a community of friends. Just don't call it a "commune."
Still, for all their journals, newsletters, e-mail lists, and other forms of obsessive communication, it cannot be said that the Extropians are taking the world by storm ... But what the Extropians lack in numbers they make up for in sheer brains; at various times people like artificial intelligence theorist Marvin Minsky, nanotechnologist Eric Drexler, and USC professor Bart Kosko (of fuzzy logic fame) have been found lurking on extropians@extropy.org.
"I agree with most of the Extropian ideas," [Drexler] said later. "Overall, it's a forward-looking, adventurous group that is thinking about important issues of technology and human life and trying to be ethical about it. That's a good thing, and shockingly rare."
There is lots more at the link, but this should give you a sense of early Extropian culture and ideology. This article further accelerated the growth of Extropianism. The group would never be very large, but dozens of its members would seed growing communities of their own, extending the Extropians' reach to millions of people.
The Extropian Principles
- Max More, 1996
Max More is the philosophical founder of Extropianism. He changed his name from Max O'Connor to Max More to reflect his belief in human potential. Still active on Substack today, he has been a central member of the Extropian community for nearly 40 years. This post from 1996 is version 2.6 of his central principles. It is the fourth post in the archives, making it one of the oldest surviving artifacts from the forum, as the first several years of extropians@extropy.org have been lost.
It seems a good time to post the Extropian Principles. Many newer subscribers may never have read them. They explain which attitudes those calling themselves "Extropians" hold in common to various degrees, and so explain the range of topics discussed on this list.
EXTROPY: A measure of intelligence, information, energy, vitality, experience, diversity, opportunity, and growth.
EXTROPIANISM: The philosophy that seeks to increase extropy.
Extropianism is a transhumanist philosophy: Like humanism, transhumanism values reason and humanity and sees no grounds for belief in unknowable, supernatural forces externally controlling our destiny, but goes further in urging us to push beyond the merely human stage of evolution. As physicist Freeman Dyson has said: "Humanity looks to me like a magnificent beginning but not the final word." Religions traditionally have provided a sense of meaning and purpose in life, but have also suppressed intelligence and stifled progress. The Extropian philosophy provides an inspiring and uplifting meaning and direction to our lives, while remaining flexible and firmly founded in science, reason, and the boundless search for improvement.
1. Boundless Expansion: Seeking more intelligence, wisdom, and effectiveness, an unlimited lifespan, and the removal of political, cultural, biological, and psychological limits to self-actualization and self-realization. Perpetually overcoming constraints on our progress and possibilities. Expanding into the universe and advancing without end.
2. Self-Transformation: Affirming continual moral, intellectual, and physical self-improvement, through reason and critical thinking, personal responsibility, and experimentation. Seeking biological and neurological augmentation.
3. Dynamic Optimism: Fueling dynamic action with positive expectations. Adopting a rational, action-based optimism, shunning both blind faith and stagnant pessimism.
4. Intelligent Technology: Applying science and technology creatively to transcend "natural" limits imposed by our biological heritage, culture, and environment.
5. Spontaneous Order: Supporting decentralized, voluntaristic social coordination processes. Fostering tolerance, diversity, long-term thinking, personal responsibility, and individual liberty.
This entire tract reads exactly like an e/acc manifesto would in 2023.
We oppose apocalyptic environmentalism which hallucinates catastrophe, issues a stream of irresponsible doomsday predictions, and attempts to strangle our continued evolution.
Living vigorously, effectively, and joyfully, requires dismissing gloom, defeatism, and ingrained cultural negativism. Problems technical, social, psychological, ecological, are to be acknowledged but not allowed to dominate our thinking and our direction. We respond to gloom and defeatism by exploring and exploiting new possibilities. Extropians hold an optimistic view of the future.
Max More himself continues to be an anti-doomer accelerationist, but many of the world's most prominent doomers today, including a young Eliezer Yudkowsky, were once squarely within this libertarian-techno-optimist philosophy. Whether this lends more or less credence to their views is up to you to decide. In true Extropian fashion, More's principles were not met with universal agreement. Several long threads were created debating various parts of his list of principles.
A Transhuman Fairytale
- Anders Sandberg, 1997
ExI: Proto-Survey
- Eliezer Yudkowsky, 1997
These posts are shorter and less influential than the pieces above, but they are still interesting cultural excerpts. Anders's fairytale extols the virtues of life extension, genetic engineering, and cryo-preservation. It shares many themes with, and was likely an inspiration for, Nick Bostrom's "Fable of The Dragon Tyrant."
Eliezer's post gives an interesting sense of what subgroups and interests Extropians see in their own community.
Rank these tenets of extropianism in order of personal time-spent-thinking-about:
A) Nanotechnology B) Singularity C) Libertarianism D) Space travel E) Intelligence amplification F) Internet G) Cryonics H) Artificial intelligence I) Postbiological bodies J) Uploading
Note that you do not rank the items in each column separately, and the ranking consists of a simple prioritization. Leave off any items you never think about, or think are unnecessary. My results are EBAHJFC, or IA-SI-Nano-AI-Upload-Net-Libertarian.
Not agreeing with Eliezer's categorization, Eric Watt Forste responds:
I find it curious that "moral philosophy" is not on your list. You also don't mention neuroscience. Nor psychology, for that matter. Nor memetics. Hmm. Since most of the things that I spend most of my time thinking about lately are not even accounted for in your pigeon-holed caricature of extropian thinking, I think I'll skip your poll, okay?
Section 2: Cryptography
Protecting Privacy With Digital Cash
- Hal Finney, 1993
Hal Finney is one of the leading candidates to be the true identity behind Satoshi Nakamoto, the pseudonymous founder of Bitcoin, and a prominent Extropian (the other two leading candidates, also Extropians, are Wei Dai and Nick Szabo). This 1993 article in issue 10 of Extropy Magazine is Hal explaining how public key cryptography can further Extropian values by allowing anonymous messaging and transactions.
How can we defend our privacy in an era of increased computerization? Today, our lives are subject to monitoring in a host of different ways. Every credit card transaction goes into a database. Our phone calls are logged by the phone company and used for its own marketing purposes. Our checks are photocopied and archived by the banks. And new “matching” techniques combine information from multiple databases, revealing ever more detail about our lives.
…
While most people concerned with this problem have looked to paternalistic government solutions, [David Chaum] has been quietly putting together the technical basis for a new way of organizing our financial and personal information … these solutions rely on the ancient science devoted to keeping information confidential: cryptography.
Hal goes on to explain how public key cryptography works (quite well, I might add; it is well worth reading as an introduction) and how it can be used to send anonymous messages with return addresses and digital signatures. The "Digital Cash" in the title is not a traditional cryptocurrency, but rather David Chaum's scheme for an online version of tradeable bank notes that preserves the anonymity of the payer (but not the payee).
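To make the blinding idea concrete, here is a toy sketch of the RSA blind-signature algebra that Chaum's digital cash is built on. This is my own illustration, not code from Finney or Chaum; real systems use large keys, padding, and a bank that tracks spent serial numbers, none of which is shown here.

```python
# Toy sketch of the Chaumian "blind signature" behind Chaum's digital cash,
# using textbook RSA with tiny numbers, purely for illustration.
from math import gcd
import secrets

# Bank's RSA key pair (absurdly small primes, illustration only).
p, q = 1009, 1013
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))   # bank's private signing exponent

# 1. Payer picks a random note serial number and a blinding factor r.
serial = secrets.randbelow(n - 2) + 2
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break

# 2. Payer sends the *blinded* serial to the bank; the bank never sees `serial`.
blinded = (serial * pow(r, e, n)) % n

# 3. Bank signs the blinded value (and debits the payer's account).
blind_sig = pow(blinded, d, n)

# 4. Payer unblinds, leaving a valid bank signature on the original serial.
signature = (blind_sig * pow(r, -1, n)) % n

# 5. Any payee can check the bank's signature, but the bank cannot link this
#    note back to the withdrawal, so the payer stays anonymous.
assert pow(signature, e, n) == serial
print("valid bank note:", serial, signature)
```

The punchline is step 5: the signature verifies, yet the bank never saw the serial number it signed, so it cannot connect the spent note back to the person who withdrew it.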
In his article on public key cryptography, Hal Finney mentions a new easy way for interested Extropians to implement anonymous messaging: PGP, or Pretty Good Privacy. This encryption scheme from computer scientist Phil Zimmermann would soon become embroiled in a legal battle as the Clinton administration attempted to prevent the export and use of encryption in general and PGP in particular. The tech-savvy, libertarian Extropians had found their mortal enemy in the anti-encryption "entropic deathworkers" of the Clinton administration, and they followed these cases with great interest.
Fwd: Professor Asks for Constitutional Review of New Encryption Regulations
- Enigl@aol.com, 1996
This post follows the case brought by Professor Dan Bernstein, who sued the government over the constitutionality of export controls on encryption enforced through the Arms Export Control Act.
The plaintiff in the case, Daniel J. Bernstein, Research Assistant Professor at the University of Illinois at Chicago, developed an "encryption algorithm" (a recipe or set of instructions) that he wanted to publish in printed journals as well as on the Internet. Bernstein sued the government, claiming that the government's requirements that he register as an arms Dealer and seek government permission before publication was a violation of his First Amendment right of free speech. This was required by the Arms Export Control Act and its implementing regulations, the International Traffic in Arms Regulations.
He won an initial case in district court, but the Clinton administration just moved the same regulations from the State department to the Commerce department. So Bernstein sued again.
The government argued that since Bernstein's ideas were expressed, in part, in computer language (source code), they were not protected by the First Amendment. On April 15, 1996, Judge Patel rejected that argument and held for the first time that computer source code is protected speech for purposes of the First Amendment. On December 6, Judge Patel ruled that the Arms Export Control Act is a prior restraint on speech, because it requires Bernstein to apply for and obtain from the government a license to publish his ideas. Using the Pentagon Papers case as precedent, she ruled that the government's "interest of national security alone does not justify a prior restraint."
b-money
- Wei Dai, 1998
RPOW
- Hal Finney, 2004
BitGold
- Nick Szabo, 2005
All three of these links contain proposals for fully anonymous and decentralized money systems. None of them was fully implemented or widely adopted, but all served as essential examples and inspiration for the cryptocurrency systems we have today. All of their authors were active Extropians.
b-money, the oldest proposal, introduces the idea of a public ledger of transactions maintained by a subset of network participants called "servers" (analogous to modern-day miners):
In the second protocol, the accounts of who has how much money are kept by a subset of the participants (called servers from now on) instead of everyone. These servers are linked by a Usenet-style broadcast channel. The format of transaction messages broadcasted on this channel remain the same as in the first protocol, but the affected participants of each transaction should verify that the message has been received and successfully processed by a randomly selected subset of the servers.
Since the servers must be trusted to a degree, some mechanism is needed to keep them honest. Each server is required to deposit a certain amount of money in a special account to be used as potential fines or rewards for proof of misconduct. Also, each server must periodically publish and commit to its current money creation and money ownership databases. Each participant should verify that his own account balances are correct and that the sum of the account balances is not greater than the total amount of money created. This prevents the servers, even in total collusion, from permanently and costlessly expanding the money supply. New servers can also use the published databases to synchronize with existing servers.
Hal Finney's Reusable Proof of Work (RPOW) contains the fundamental insight behind the "chain" in all modern blockchains. Early proof-of-work systems, in an effort to avoid the double-spend problem, produced tokens that could only be used once. But these one-time-use tokens made exchange clunky and confusing, and they kept the compute requirements low, which left the door open to some spam. Hal's RPOW solves this problem by allowing recipients to exchange a used token with a public server for a reusable one, while still avoiding double-spends; a toy sketch of the exchange loop follows the excerpt.
Reusable proof of work (RPOW) tokens extend on hashcash to provide a limited form of reuse. As explained above, allowing hashcash to be freely reused would make the tokens effectively worthless, as there would be no limits to how many times a given token could be shown. RPOW tokens provide a limited form of reuse called sequential reuse. In essence, once a POW token is created in the form of a piece of hashcash, it can be exchanged at an RPOW server for an RPOW token of equal value. The RPOW token can then be exchanged, sent, or otherwise used similarly to a hashcash token. However, rather than being effectively discarded after use, the RPOW token can be exchanged by the recipient at the RPOW server for a new, equal-value, RPOW token. This token can be used exactly like the first one. It can be sent to a recipient, who can verify and exchange it at an RPOW server, just as they might do for a POW token. And that new recipient can, after exchanging the RPOW token at the server and receiving a new one in exchange, retain the new RPOW token and use it again in the future.
In this way, a single POW token is the foundation for a chain of RPOW tokens. The effect is the same as if the POW token could be handed from person to person and retain its value at each step.
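Here is a minimal sketch of that exchange loop, assuming a hashcash-style proof of work and a single trusted server. The names and the HMAC "signature" are my own simplifications; the real RPOW used RSA signatures generated inside trusted IBM 4758 hardware.

```python
# Toy sketch of the reuse pattern Finney describes: a hashcash-style proof-of-work
# token is exchanged at a server for a fresh signed token, and each exchanged
# token can itself be exchanged again. Not the real RPOW protocol.
import hashlib, hmac, os

DIFFICULTY = 16  # leading zero bits required of a proof of work

def mint_pow(challenge: bytes) -> bytes:
    """Grind nonces until sha256(challenge || nonce) has DIFFICULTY leading zero bits."""
    nonce = 0
    while True:
        candidate = challenge + nonce.to_bytes(8, "big")
        digest = int.from_bytes(hashlib.sha256(candidate).digest(), "big")
        if digest >> (256 - DIFFICULTY) == 0:
            return candidate
        nonce += 1

class RpowServer:
    def __init__(self):
        self.key = os.urandom(32)   # server's signing key
        self.spent = set()          # every token may be exchanged exactly once

    def _sign(self, token):
        return hmac.new(self.key, token, hashlib.sha256).digest()

    def exchange(self, token, signature=None):
        """Accept a raw POW token or a previously issued RPOW token, refuse
        double-spends, and hand back a fresh single-use RPOW token."""
        if token in self.spent:
            raise ValueError("double spend rejected")
        if signature is None:  # raw hashcash-style token: re-check the work
            digest = int.from_bytes(hashlib.sha256(token).digest(), "big")
            assert digest >> (256 - DIFFICULTY) == 0, "bad proof of work"
        else:                  # RPOW token: check the server's own signature
            assert hmac.compare_digest(signature, self._sign(token))
        self.spent.add(token)
        new_token = os.urandom(32)
        return new_token, self._sign(new_token)

# One unit of real work backs an arbitrarily long chain of exchanges.
server = RpowServer()
pow_token = mint_pow(b"alice pays bob")
tok, sig = server.exchange(pow_token)   # Bob cashes in the raw proof of work...
tok, sig = server.exchange(tok, sig)    # ...Carol exchanges Bob's RPOW token...
tok, sig = server.exchange(tok, sig)    # ...and so on down the chain.
```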
Nick Szabo's BitGold is a synthesis of all of these ideas into something that we would recognize as a full cryptocurrency today; a toy sketch of its chained structure follows the quoted steps.
Here are the main steps of the bit gold system that I envision:
1. A public string of bits, the "challenge string," is created (see step 5).
2. Alice on her computer generates the proof of work string from the challenge bits using a benchmark function.
3. The proof of work is securely timestamped. This should work in a distributed fashion, with several different timestamp services so that no particular timestamp service need be substantially relied on.
4. Alice adds the challenge string and the timestamped proof of work string to a distributed property title registry for bit gold. Here, too, no single server is substantially relied on to properly operate the registry.
5. The last-created string of bit gold provides the challenge bits for the next-created string.
6. To verify that Alice is the owner of a particular string of bit gold, Bob checks the unforgeable chain of title in the bit gold title registry.
7. To assay the value of a string of bit gold, Bob checks and verifies the challenge bits, the proof of work string, and the timestamp.
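A toy sketch of that chain (mine, not Szabo's): a single in-memory registry and trivial difficulty stand in for the distributed timestamping and replicated title registry he calls for.

```python
# Toy sketch of the chain structure in Szabo's steps 1-5: each new piece of
# "bit gold" is a proof of work over challenge bits taken from the previous
# piece, and a title registry records who owns it.
import hashlib, time

DIFFICULTY = 12  # leading zero bits

def solve(challenge: bytes) -> bytes:
    nonce = 0
    while True:
        proof = challenge + nonce.to_bytes(8, "big")
        h = int.from_bytes(hashlib.sha256(proof).digest(), "big")
        if h >> (256 - DIFFICULTY) == 0:
            return proof
        nonce += 1

registry = []          # ordered chain of {owner, challenge, proof, timestamp}
challenge = b"genesis challenge bits"

for owner in ["alice", "bob", "alice"]:
    proof = solve(challenge)          # step 2: do the work
    registry.append({                 # steps 3-4: timestamp and register the title
        "owner": owner,
        "challenge": challenge,
        "proof": proof,
        "timestamp": time.time(),
    })
    # step 5: the last-created string supplies the challenge bits for the next one
    challenge = hashlib.sha256(proof).digest()

# step 6: verifying a title is just walking the chain of registry entries
for prev, cur in zip(registry, registry[1:]):
    assert cur["challenge"] == hashlib.sha256(prev["proof"]).digest()
print([entry["owner"] for entry in registry])
```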
Section 3: Existential Risk
The Great Filter
- Robin Hanson, 1996
For the likely readers of this page, existential risk is one of the most important intellectual topics. Although discussions of existential risk go back at least to the invention of nuclear weapons, the modern characters and arguments can be traced back to the Extropian mailing list. The extreme techno-optimism of Extropianism immediately leads to questions about risk. If we are so confident that humans will one day spread throughout the endless universe with swarms of Von Neumann probes, then where are all the other great galactic civilizations? And if we keep increasing the power of Man through technology, then what happens when one crazy person can destroy the world on a whim?
In his piece on The Great Filter, Robin Hanson analyzes the first question; the arithmetic core of his argument is compressed after the excerpt below.
Humanity seems to have a bright future, i.e., a non-trivial chance of expanding to fill the universe with lasting life. But the fact that space near us seems dead now tells us that any given piece of dead matter faces an astronomically low chance of begetting such a future. There thus exists a great filter between death and expanding lasting life, and humanity faces the ominous question: how far along this filter are we?
…
Within the next million years (at most) therefore, our descendants seem to have a foreseeable (greater than one in a thousand) chance of reaching an "explosive" point, where they expand outward at near the speed of light to colonize our galaxy, and then the universe, easily overpowering any less developed life in the way. FTL (faster than light) travel would imply even faster expansion.
We expect such an explosion to fill most every available niche containing usable mass or negentropy resources. And even if the most valuable resources are between the stars or at galactic centers, we expect some of our descendants to make use of most all the matter and energy resources they can economically reach, including those in "backwater" solar systems like ours and those near us.
…
Our planet and solar system, however, don't look substantially colonized by advanced competitive life from the stars, and neither does anything else we see. To the contrary, we have had great success at explaining the behavior of our planet and solar system, nearby stars, our galaxy, and even other galaxies, via simple "dead" physical processes, rather than the complex purposeful processes of advanced life. Given how similar our galaxy looks to nearby galaxies, it would even be hard to see how our whole galaxy could be a "nature preserve" among substantially-restructured galaxies.
These considerations strongly suggest that no civilization in our past universe has reached such an "explosive" point, to become the source of a light speed expansion of thorough colonization. (That is, no civilization within the past light cone of a million years ago for us). Much follows from this one important data point.
Consider our best-guess evolutionary path to an explosion which leads to visible colonization of most of the visible universe:
1. The right star system (including organics)
2. Reproductive something (e.g. RNA)
3. Simple (prokaryotic) single-cell life
4. Complex (archaeatic & eukaryotic) single-cell life
5. Sexual reproduction
6. Multi-cell life
7. Tool-using animals with big brains
8. Where we are now
9. Colonization explosion

(This list of steps is not intended to be complete.) The Great Silence implies that one or more of these steps are very improbable; there is a "Great Filter" along the path between simple dead stuff and explosive life.
…
Rational optimism regarding our future, then, is only possible to the extent we can find prior evolutionary steps which are plausibly more improbable than they look. Conversely, without such findings we must consider the possibility that we have yet to pass through a substantial part of the Great Filter. If so, then our prospects are bleak, but knowing this fact may at least help us improve our chances.
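The core of the argument compresses into one line of arithmetic (my framing, not a formula from Hanson's paper). If roughly $N$ star systems in our past light cone could have started down this path, and $p_1, \dots, p_9$ are the chances of passing each step, then observing zero colonization explosions tells us roughly that

$$N \cdot \prod_{i=1}^{9} p_i \;\lesssim\; 1 \quad\Longrightarrow\quad \prod_{i=1}^{9} p_i \;\lesssim\; \frac{1}{N},$$

so with $N$ plausibly $10^{20}$ or larger, at least one step must be astronomically improbable. The ominous question is whether that step is behind us or still ahead.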
Anyone familiar with Robin's later work on Grabby Aliens will recognize many similarities. This post is the beginning of a line of work that Robin would pull through the next 30 years of his career.
Re: The great filter
- Nick Bostrom, 1996
Robin's piece spawned several dozen posts of discussion and praise on the Extropian forum. The entire thread is worth reading, but the discussion between Nick Bostrom and Robin Hanson is especially interesting because this conversation surely influenced Bostrom's later thinking on existential risk, which underpins much of the modern discussion.
I recommend every transhumanist to read Robin Hanson’s short document. It gives a clear presentation of an argument that should be taken very seriously.
…
If life developed independently on Earth and on Mars, what could block the conclusion that our far future is probably doomed? I can see only three potential answers:
1. As you suggest, there could be a great filter at some later stage in the evolution of high intelligence.
2. Higher life forms continue to prosper but do not cause an "explosion" into cosmos. (I think that Michael Wiik favoured this alternative.)
3. Higher life forms do explode into cosmos, but in ways that are invisible to us. This would presumably mean that they do not engage in galactic scale constructions, and that they are not interested in contacting human level life.
…
Then, of course, there is the other possibility: that intelligent life almost always put an end to its own existence once it has reached a certain level of sophistication. It would be interesting to list the ways in which this could happen.
POLITICS: Avoiding nuclear anarchy
- Nick Bostrom, 1997
Nick Bostrom also incubated ideas on the Extropian forum that still dominate his thinking and the modern discourse on existential risk. This post is a clear prototype of his 2019 paper The Vulnerable World Hypothesis and of the longtermist focus on existential risk in general.
In the recent book "Avoiding nuclear anarchy", a group of Harvard researches argues that the danger of nuclear prolifieration is *the* greatest threat to american national security. Especially worrisome is the risk of leakage of weapon grade plutonium and uranium, as well as lower grade fuel, nuclear related technology and nuclear scientists from the former Sovjet Union. The authors claim that though the american government has made some attempts to cooperate with the Russians to contain this threat, the efforts have not been proportionate to the urgency of the situation.

Serious as this may be though, the risk of nuclear leakage from the ex-Sovjet is only a minor part of the picture of dangers that lay ahead of us. There are other sources for nuclear proliferation, there is the risk of proliferation of biological and chemical weapons, and there are even greater risks in some potential new technology such as nanotechnology.

None of this is news, but I bring it up for two reasons. The firt reason is that I think that we extropians and transhumanists do not consider the risks seriously enough. We usually acknowledge their existence, but are not generally too worried about them and we do not spend a great portion of our time thinking about what we could do about them. The explanation is probably that it is *boring*. No one wants to be the one who complaints, urges restraint and focuses on the negative aspects of things. This is an explanation but not an excuse; it would be great if we could become more responsible.

Another explanation -and this brings me to the second point I want to make- is that many of these problems seem to call for solutions that clash with the libertarian doctrine held by many on this list. At least at first sight, it seems that a reformed and strengthened United Nation would be the best suited institution to supervise the use of very dangerous technologies. I know the mere mention of the UN probably causes some people here to want to throw up. The alternative would perhaps be a world where USA plays the international big brother role, but this raises many problems of moral authority, financing etc. etc. Perhaps it is therefore worth considering an institution like the UN in whose decision making processes USA and other nations are allowed to play a part in the that are in some proportion to their real power. If this were to be effective, the UN would of course have to have the mandate to make and enforce some laws that would prevent any nations, organisations and individuals from acquiring weapons of mass distruction or any technologies with dangers greater than the group in question could be entrusted to handle.

This would mean that some sacrifice of individual freedom and national souveigninty would have to be made. But that would be a prise worth paying if it would increase the chance that mankind succeeds in avoiding putting an end to civilization or even its existence as a species. Since we have the vision of how fantastic the world could be if the forces of possible technologies were released and employed wisely, we should also have the will to do what it takes to prevent things from going tragically wrong, even if this means adding new principles to our thinking that circumscribe the applicability of some of our most favoured dogmas.
Do you think there is something in this?
Bostrom extends this idea to the familiar Unilateralist's Curse in a response later in the thread.
Let me ask you a question. If there were a little machine which one could have in one's pocket, and which could cause you and me and all our friends to die immediately, would you prefer that only one or two persons got hold of such gadgets, or would you feel safer if 50 000 individuals and organisations had them?

As long as weapons are of relatively limited destructive capacity, it may be debatable whether concentration or defusion would make for a safer world. But when the weapons become so advanced that they can bestow upon a single agent the power to cause immense damage, possibly even to terminate the human species, then we have a different situation.
Section 4: AI
Thinking about the future...
- David Musick, 1996
Note that this post was not correctly included in the threads which followed, so to see the replies, click “Messages sorted by [subject]” for the full list.
Probably the most common topic of conversation on the Extropian mailing list was AI, superintelligence, and the singularity. It is the obvious end point for anyone interested in extrapolating trends in computing power decades into the future. Many of these discussions were characteristically Pollyannaish. But there was enough discussion that even the techno-optimist Extropians covered tons of the standard AI safety arguments, their responses, and the responses to the responses. It seemed to me, reading through the archives, that every AI safety argument I've seen on Twitter already happened on the Extropian mailing list in 1996.
This post from David Musick is certainly not the first to consider AI risk, though it is one of the first posts on the topic to survive in the archives. It is interesting because of the dissonance between the Extropian allegiance to minimizing entropy and the realization that humans may be entirely replaced in this journey by their own technology.
I've been thinking a lot lately about the evolution of things. About technology. Nanotech. Artificial Life. Artificial Intelligence. And I was thinking that it's only a matter of time until some very highly advanced nanotech auto-replicators evolve from technology. It's also only a matter of time until very advanced artificial intelligence develops, and develops capabilities far exceeding human capabilities.
I also see no reason to suppose that humans will have control over these developments, past a certain point. Sure, we will try to make these things very friendly to life and to humans, and it may be in their best interests to be so. But I can think of all kinds of reasons why none of these developments would really care about humans or other types of life. I can think of situations where they would care about other life enough to not displace it too much, but I can't find any compelling reasons why this should necessarily happen.
I think some very advanced life forms will eventually emerge through technology. Life forms far more advanced than current Earth life, in terms of survivability and in their ability to evolve quickly and exploit energy resources efficiently. I don't think that current forms of life will really have much of a chance against these advanced forms.
This actually isn't very disturbing to me -- I sort of think it's a good thing. Survival of the fittest. We're all for it when we're the fittest. But how long will that be?
I love life. Not just my life. But Life. The whole concept. Things mutating and adapting and competing. Weeding out the inferior. The whole process. I think it's great. I just think that future life may look back on us the way we look back on pre-cellular life. Interesting, yes. But only the first step. Just setting things up for the explosion. Ancestral, but still very primative.

It should be interesting to see how things develop -- for as long as I can keep up, that is.
Re: Thinking about the future…
- Max More, 1996
The responses to David Musick's post run the gamut of AI safety debates and responses. Near-term AI risk discussions have changed a lot in response to the unexpected capabilities of generative AIs, but the long-term existential risk discussions happening today are remarkably similar to what they were 30 years ago.
Max is responding to Anders Sandberg, who thinks it unlikely that AIs would outcompete us as David worries, because they would occupy a different ecological niche.
If they need us for doing things physically, we would still have a strong position. Nevertheless, powerful SI's in the computer networks, could exert massive extortionary power, if they were so inclined. So I still think it important that SI researchers pay attention to issues of what values and motivations are built into SIs.
Re: Thinking about the future…
- Anders Sandberg, 1996
Anders responds by wondering whether we could just raise AIs as children to teach them human values.
I agree that it is important to think of what values or motivations SIs have, but they might be hard to code. I like David Brin's idea (in some of his short stories, like "Lungfish") that AI could be brought up a bit like human children and thus acquire human values. An AI living in the Net might have a very real chance of identifying with the struggle against net censorship (would limit it) and against Microsoft (might be incompatible to its inhomogeneous distributed mind).
Re: Thinking about the future…
- Dan Clemmensen, 1996
In response to QueeneMUSE, who wonders what use the AIs would even find in destroying us, Dan Clemmensen hits upon the ideas of instrumental convergence, FOOM, and even the outline of Eliezer's soon-to-be-created scenario of a computer-bound superintelligence ordering nanomachines off of the internet.
It's likely that, if we can produce an SI, we can produce many SIs. However, my belief is that there is really only one relevant SI, and that is an SI whose motivation is to become more intelligent. This SI is the important one, because this is the one that has a built-in positive feedback mechanism. I also belief that this motivation is very likely to be a basic part of the first SI, almost by definition. The creator(s) of the first SI are likely to have this motivation themselves Otherwise, why create an SI? Further the SI may be a computer-augmented human or some other type of human-computer collaboration, in which case the SI is likely to include its creator, who surely has this motivation.
> Even if they could "use" us for manual labor- and what would we produce for them?

More intelligence. We will be useful until the SI has direct control of manufacturing and connection of additonal computational capability. The SI will be able to "use" us just as we "use" each other: by contracting for services, either by letter or over the telephone. An SI embedded in the internet will have no difficulty arranging for valid credit card numbers, bank accounts, etc. I believe that an SI will be able to design and build whatever automated tools it needs in just this way, in a matter of a few days or weeks. Once the tools are available, it will no longer need humans to provide these services. At this point utility is no longer a reason for the SI to preserve humanity. However I hope the SI will derive some other reason, using its superior intelligence.
The SI may not want to destroy humanity. Humanity may simply be unworthy of consideration, and get destroyed as a trivial side effect of some activity of the SI. A simple example: the SI decides to maximized its computational speed by increasing its mass and converting the mass into computational units. It does this by gathering all the mass in the solar system (mostly the sun's mass) This is likely to be unhealthy for the humans. the SI may then decide to increase its computational speed by increasing its density, and convert all that mass into a neutron star. This is likely to be more unhealthy.
Re: Thinking about the future…
- Robin Hanson, 1996
Robin contends that worries about a singleton AI, like Clemmensen's scenario, won't apply in a more likely multi-polar AI world.
Sure, given one mean super-AI, and rest of the world far behind, we would be at its mercy. Similar fears come from one person in control of any vastly superior technology, be it nanotech, nukes, homemade black holes, whatever. But realistically, any one AI probably won't be too far ahead of any other AI, so they can police each other.
Re: Thinking about the future…
-Nick Bostrom, 1996
Bostrom responds by claiming that a fast enough takeoff could make a singleton AI inevitable.
The issues are complex. To begin with, the claim that it is improbable that one AI would be far ahead of any other AI can be challenged. Suppose there are possible breakthroughs to be made in computer technology. Once an AI became sufficiently intelligent, it could think out a radical improvement to its design; which would make it more intelligent, allowing it to accelerate further; and so on. (I think this is Dan Clemmensen's view.) For example, the AI might be the first to make efficient use of nanotechnology. If nanotech has such potentials as Drexler thinks, access to a nanotech laboratory would be all the AI would need in order to take off. The contest would be over before anyone except the AI had realised it had begun.
What Is Intelligence
- Dan Clemmensen + Robin Hanson, 1996
This spawns a great conversation between Clemmensen and Hanson about whether computers will create significantly faster feedback loops than previous technologies; a toy numerical sketch of the two positions follows their exchange.
Robin:
Sure computers are new, and will have new kinds of impacts. And at one time cars were new, trains were new, radio was new, etc. But the question here is, do they fundamentally change the nature of economic growth any more than these did? You have stated your belief in this, but have not offered any reasons.

Dan:

Cars, Trains, and radio are not intelligence-augmemnting technologies in the same sense that computers are likely to be. This is the crux of my argument.

Robin:

My point is, what is it about intelligence-augmenting, as opposed to communication-augmenting, transporation-augmenting, lifespan-augmenting, or any other X-augmenting new techology, that leads you to expect some special different effect on economic growth?

Dan:
1) Assumption: the compuer component contributes non-trivially to the intelligence of the entity, once the entity comes into existance
2) Observation: computer hardware and software technology are advancing at an empirically-observed exponential rate known as "Moore's Law", which we both agreed is unlikely to change in the near future. This rate is dramatically faster than the 30-year sustained rates of increase of the other technologies you mention.
3) Assumption: a more-intelligent entity can develop intelligence-augmentation faster than a less-intelligent entity.
4) Conclusion: This is a fast-feedback loop.

Robin:

Various measures of computer cost and speed have had a roughly constant growth rate over four decades (or more?) During this time the world economy has doubled (more than once?) the computer industry has doubled many times, and computers are lots more available. Thus this growth rate does not seem especially sensitive to the size of the world economy or the computer industry, or to computer availability. Or perhaps it is senstive, but such effects are just about canceled out by the computer design problem slowly getting harder.

It sounds like you think this growth rate is sensitive to "researcher intelligence". Note though that the growth rate hasn't changed much even though computer companies now hire lots more researchers than they once did, because the industry is larger, and though they could have instead hired the same number of researchers that they did at the beginning but made sure they were the IQ cream of the crop. So that degree of IQ difference doesn't seem to make much difference in the growth rate. And note that the average researcher today is smarter than the average one 40 years ago, because humanity knows so much more. And note that researchers today have access to lots more computers than they once did, and that hasn't made much difference.
So you posit an effect of researcher IQ on computer growth rates that can not be substituted for by hiring more researchers, or by having them know more, and that is not very strong for small IQ differences. So at what level of increased IQ do you expect this effect to kick in? And you posit an effect of more computer access on IQ that we haven't seen yet. So at what level of computerization do you expect this effect to kick in? And why should we believe in either of these not-yet-observed effects?
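The disagreement is easy to put in toy-model terms (my own sketch, not math either of them posted): Moore's Law is exponential growth at a fixed rate, while Dan's assumptions 1-3 amount to letting the rate of improvement rise with current capability, which gives faster-than-exponential growth and, formally, a finite-time blow-up.

```python
# Toy comparison of plain Moore's-Law growth with the feedback loop in Dan's
# assumptions 1-3, where the rate of improvement scales with current capability.
# Parameter values are arbitrary illustrations, not estimates of anything.

def moores_law(years: int, doubling_time: float = 1.5) -> list[float]:
    """Capability doubles on a fixed calendar schedule, whoever designs the chips."""
    return [2 ** (t / doubling_time) for t in range(years + 1)]

def feedback_loop(years: int, k: float = 0.46, exponent: float = 1.5,
                  steps_per_year: int = 1000) -> list[float]:
    """dC/dt = k * C**exponent with exponent > 1: a smarter entity augments its own
    intelligence faster, so growth is super-exponential (with these numbers it
    diverges just after year 4, which is the 'fast-feedback' point)."""
    c, dt, out = 1.0, 1.0 / steps_per_year, [1.0]
    for _ in range(years):
        for _ in range(steps_per_year):
            c += k * c ** exponent * dt
        out.append(c)
    return out

for year, (m, f) in enumerate(zip(moores_law(4), feedback_loop(4))):
    print(f"year {year}: moore = {m:6.1f}   feedback = {f:8.1f}")
```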
The SI That Ate Chicago
- Eric Watt Forste, 1996
Or what about even super-human-in-every-way AIs still finding mutually beneficial trades with humans due to comparative advantage? This is an idea that I have thought about myself several times, so I was humbled to see it discussed here five years before I was even born; a small worked example follows the quote.
the SI will probably be a lot more intelligent (in terms of being able to figure out solutions to problems) if it cooperates with the existing body of human researchers than if it eats them.
(As usual in discussions of Robots That Will Want to Eat Us, I encourage all participants in the discussion to study the economic principle of Comparative Advantage and make sure they understand not only how it applies to trade between nations but also how it applies to trade between individuals.)
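Forste's parenthetical is easy to make concrete with a two-good toy example; the numbers below are invented purely for illustration.

```python
# The SI is absolutely better at both tasks, yet it is still cheaper for it to
# buy widgets from the human than to make them itself, because the human gives
# up less research per widget. All numbers are invented for illustration.

si  = {"research_per_hr": 100, "widgets_per_hr": 50}   # gives up 2.0 research per widget
hum = {"research_per_hr": 1,   "widgets_per_hr": 2}    # gives up 0.5 research per widget

WIDGETS_NEEDED = 10
PRICE = 1.0  # research paid per widget; any price between 0.5 and 2.0 benefits both

# Cost to the SI of making the widgets itself (research forgone):
si_make_cost = WIDGETS_NEEDED / si["widgets_per_hr"] * si["research_per_hr"]   # 20.0
# Cost of buying them from the human instead:
si_buy_cost = WIDGETS_NEEDED * PRICE                                           # 10.0

# The human spends 5 hours on widgets, forgoing 5 research, and is paid 10.
hum_forgone = WIDGETS_NEEDED / hum["widgets_per_hr"] * hum["research_per_hr"]  # 5.0
hum_gain = WIDGETS_NEEDED * PRICE - hum_forgone                                # +5.0

print(f"SI saves {si_make_cost - si_buy_cost:.1f} research by trading")   # 10.0
print(f"human nets {hum_gain:.1f} research worth of income by trading")   # 5.0
```

Both sides come out ahead even though the SI is better at everything, because what matters for trade is opportunity cost, not absolute ability.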
SI = Corporation?
- Lyle Burkhead, 1996
Another argument you've probably seen hashed out several times on Twitter or LessWrong is the usefulness of analogies between corporations and superintelligences. So it was on the Extropian forums in 1996.
My apologies to Nicholas Bostrom and Dan Clemmensen... but I was curious to see what this post would look like if you substitute "corporation" for "SI" throughout. Does it still make sense? What exactly is the difference between a corporation and an SI?
Re: Lyle’s Laws
- Hal Finney
Hal makes an interesting point about how the "discounted future returns" method of valuing stocks would lead to a sustained explosion in stock prices as the singularity approaches. He seems to take this literally, but it could also be interpreted as a reductio ad absurdum; a toy version of the valuation formula follows the excerpt.
Now consider how this model works as we approach an era where industrial production grows explosively, due to nanotech, or AI, or self-reproducing cheap robots, or some other breakthrough. This can cause returns from the stock to begin increasing at a rapid rate, even faster than the discount factor reduces them. The result could be that the infinite sum above diverges! The stock will in fact, today, have an infinite value to you.
Right now not many people would believe this reasoning. But as we approach a singularity era, and more and more people start to sense "great things just beyond the horizon", it will become more accepted that owning productive resources is going to be the ticket to future wealth beyond imagining. Stocks and other productive assets will become overwhelmingly attractive. We will see a sudden transition to a regime where everyone expends all other resources to buy stocks. The result would be a one-time transition, an explosion in stock prices.
I would expect the financial explosion to occur a few years to perhaps a few decades before the productivity explosion, depending on how farsighted investors are. Obviously political factors will be relevant too; if stocks are nationalized then ownership will be irrelevant. But the huge returns possible can tolerate even a considerable degree of uncertainty in how likely they are to occur.
So actually a stock market explosion is not such a far fetched possibility. I doubt that we'll see it next year, though! But keep it in mind when you make your own financial plans.
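The model Hal is gesturing at is the standard growing-perpetuity valuation (my notation, not his): with next year's payout $D$, discount rate $r$, and payout growth rate $g$,

$$PV \;=\; \sum_{t=1}^{\infty} \frac{D\,(1+g)^{t-1}}{(1+r)^{t}} \;=\; \frac{D}{r-g} \quad \text{for } g < r,$$

and the sum formally diverges once expected growth $g$ reaches the discount rate $r$, which is exactly the "infinite value today" he describes.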
Section 5: Singularity Worship and Memetics
1997: The Year of Fire.
- Eliezer Yudkowsky, 1997
In addition to personally believing in the cosmic importance of the technological singularity, Extropians were also interested in spreading their ideas to the masses. Inspired by and perhaps envious of the success of traditional religions, they sought to replicate them by crafting their own Extropian memes.
It was the Year of Fire.
It was the Beginning of the End.
It was the Year All Hell Broke Loose.

1997: The leading edge of the Singularity.
Poem: Event Horizon
- Michael Bowling, 1997
Event Horizon
Bathing in the Glory of the Orange Sun
I set my Soul on the Event Horizon
Motive Power to Overcome
Evolving through this Journey I have Become
Dancing across the line
That separates Earth and Sky
Rising
Expanding
I will never die!
Extropianism in the Memetic Ecosystem
- Lyle Burkhead, 1997
Lyle Burkhead references the Old Testament to explain why he thinks that "Extropianism amounts to the same thing as secular Judaism."
The vision of a transhuman condition goes all the way back to Isaiah. [65:20] “Never again will there be in it [the new Jerusalem] an infant who lives but a few days, or an old man who does not live out his years; he who dies at a hundred will be thought a mere youth; he who fails to reach a hundred will be considered accursed.”
Re: Upload motivations (was SPACE: Lunar Billboard?)
- Eliezer Yudkowsky, 1997
While not explicitly religious, Eliezer's discussion is clearly very spiritual.
You haven't Doubted enough if your world is so secure that you don't care to know the *real* truth. When was the last time you confronted a Hard Problem... something so completely and blankly unanswerable that you couldn't even touch it? The Meaning of Life? The First Cause? Conscious experience? I could be wrong, oh Eric Watt Forste, but from what you have said your world is far too... not certain... *solid*. You blithely speak of answering the First Cause with the Anthropic Principle and say life has "many" meanings if asked why you get out of bed in the morning.
Doubting is only *mere* fun when you doubt only the deep surface issues like political conflicts and your emotional motivations. When your Doubting has reduced your entire goal system to a shambles, so that you cannot even walk across a room without wondering why, you will take Doubt a bit more seriously.
…
I want the Final Answers. The burning search for that takes precedence over everything else. What can I do without the Truth? How can I know what I'm doing or why I should be doing it without the Truth? Doubting isn't just a hobby or even just a way of life. It is an action taken with a specific and most final goal in mind: To Know everything that matters.
RELIGION: Death of a god and Deity Space
- James Rogers, 1996
James wonders what it would take to engineer a highly virulent religious meme.
When I look at these ideas today, it now becomes clear that deities are actually complex, highly evolved, virulent memes. The fact that so many gods have died throughout the history of man proves just how mortal *any* god is. This would also imply that *any* meme can achieve deitic status if it evolves to a certain level and in certain directions. A deity can be killed, but this requires killing a meme which has evolved extensive defenses against eradication.
On a side note, I wonder how hard it would be to artificially construct a true deitic meme? We talk about memetic engineering, but constructing a meme of this complexity and power would require significant thought and intelligence. Most religious memes seem to evolve as offspring from older now dead religious memes.
Conclusion
The Extropians Still Matter
When the Extropians began, their interests were little more than science fiction. Unlike most people, they were willing to deeply explore speculative technologies and even build a philosophy and community around them.
Now their world is real: human-level artificial intelligence, worldwide adoption of cryptocurrencies, genetic engineering, nootropic enhancement, virulent memetics, pocket supercomputers, and advanced robotics. The Extropians' early thinking about these technologies is more relevant than ever.
There is Much More to Uncover
My exploration of the Extropian archives is incomplete. There are tens of thousands of posts in the archive that I did not read, and thousands more on other message boards like SL4 and Cypherpunks that I read only cursorily.
Despite the huge amount of content, these sources are still very information dense. Foundational connections to the modern day are common. Anyone who wants to better understand online intellectual culture should look to its origins in the Extropians.