The Atlantic, March 2012
Edited by Andy Ross
If future generations matter in proportion to their population numbers, then
existential risk mitigation has a much higher expected utility than pretty
much anything else you could do. I think the biggest existential risks
relate to certain future technological capabilities that we might develop.
Machine intelligence or molecular nanotechnology could lead to new kinds of
weapons systems. It seems unlikely that any natural existential risks would
kill us all in the next hundred years.
An observation selection
effect is a selection effect introduced not by limitations in our
measurement instrument, but rather by the fact that all observations require
the existence of an observer. For instance, intelligent life evolved on
Earth. But to infer from this that life is likely to evolve on most
Earth-like planets overlooks an observation selection effect: however rare
intelligent life may be, every observer will find that it evolved on its own
planet, since only planets where life arose contain anyone to make the
observation. When it comes to human extinction and existential risk,
observation selection effects might be relevant.
Think of yourself as if you were a randomly
selected observer from some larger reference class of observers. This
reasoning underlies the doomsday argument, which holds that we underestimate
the probability that the human species will perish soon. Compare two
hypotheses about how long the
human species will last in terms of how many total people have existed and
will come to exist. One hypothesis is that a total of 200 billion humans
will have ever existed at the end of time, the other that 200 trillion
humans will have ever existed.
Estimating that there have been about 100
billion humans so far, you get a probability shift in favor of the
hypothesis that only 200 billion humans will ever have existed. That's
because if you are a random sample of all the people who will ever have
existed, the chance of coming up with a birth rank around 100 billion is a
thousand times larger if there are only 200 billion people in total than if
there are 200 trillion. On the larger hypothesis, you would be among the
earliest 0.05 percent of all people ever to live, which would make you
surprisingly early.
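To see the size of that shift, here is a minimal sketch of the Bayesian
update, assuming equal prior credence in the two hypotheses and a birth rank
sampled uniformly from everyone who will ever live (both assumptions are the
standard idealizations of the argument, not claims made in the interview):

```python
# Toy Bayesian update behind the doomsday argument.
# Assumptions: equal priors over the two hypotheses, and your birth rank
# is uniformly distributed among all humans who will ever have existed.

rank = 100e9       # your birth rank: roughly the 100-billionth human
n_small = 200e9    # hypothesis A: 200 billion humans in total, ever
n_large = 200e12   # hypothesis B: 200 trillion humans in total, ever

# Under uniform sampling, any rank up to N has likelihood 1/N, so the
# observed rank only matters in that it falls below both totals.
assert rank <= n_small <= n_large
lik_small = 1.0 / n_small
lik_large = 1.0 / n_large

# With equal priors, the posterior odds equal the likelihood ratio.
odds = lik_small / lik_large
posterior_small = odds / (odds + 1.0)
print(f"Odds favoring the 200-billion hypothesis: {odds:.0f} to 1")
print(f"Posterior probability of the smaller total: {posterior_small:.4f}")
# -> 1000 to 1; posterior about 0.999
```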
Human beings have been around for roughly a hundred thousand years, but new
kinds of risks are coming that have never existed before in human history,
risks that might give us the means to create new kinds of weapons or new
kinds of accidents. The fact that we've survived for a hundred thousand
years gives us little confidence with respect to those risks.
Because of the observation selection effect, any species anywhere will find
that it has survived up to the present: you don't observe yourself after
you've gone extinct.
Existential risks include the
permanent destruction of our potential for desirable future development. Our
permanent failure to develop the sort of technologies that would
fundamentally improve the quality of human life would be an existential
catastrophe. I think there are vastly better ways of being than we humans
can currently reach and experience. We have fundamental biological
limitations. The world could be a lot better both in the transhuman way and
in more elementary ways. A permanent failure to realize better modes of
being is an existential risk.
Various developments in biotechnology
and synthetic biology are quite disconcerting. We are gaining the ability to
create designer pathogens, and the genetic blueprints of various disease
organisms are already in the public domain. We're also developing machines
that can take a digital blueprint as input and print out the corresponding
DNA string; soon it may be possible to print out whole viruses from such
blueprints. So already there you have a predictable risk, and once you can
start modifying these organisms in certain ways, a whole additional frontier
of danger opens up. There are also different kinds of population control
that worry me.
We are doing new things, and there is a risk that
something could go wrong. Take nuclear weapons: suppose it had turned out
that you could make a nuclear weapon by baking sand in a microwave oven, or
something like that. Where would we be now? Presumably, once that discovery
had been made, civilization would have been doomed.
Perhaps we are in fact living in
a simulation as opposed to physical reality. At least one of the following
three propositions must be true:
(1) Civilizations like ours go extinct before reaching technological
maturity.
(2) Technologically mature civilizations lose interest in creating detailed
ancestor simulations.
(3) We're almost certainly living in a computer simulation.
If (1) is
true, we will succumb to an existential catastrophe before reaching
technological maturity. If (1) is false, some civilizations at our stage
will reach technological maturity. If (2) is also false, some of those
mature civilizations retain an interest in creating ancestor simulations. A
technologically mature civilization could create astronomical numbers of
these simulations, many more than there were original histories. So almost
all observers with our types of experiences would be living in simulations.
If so, and if we are typical observers, then (3) is true: we are almost
certainly living in a computer simulation. And if we are, we could be
deleted.
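The step from "many more simulations than original histories" to "almost all
observers are simulated" is simple arithmetic. A toy illustration, with
invented numbers (nothing in the interview fixes them):

```python
# Toy count of simulated versus original observers. The numbers are
# illustrative assumptions; the argument only needs simulations to vastly
# outnumber original histories.

original_histories = 1        # one unsimulated history like ours
ancestor_simulations = 10**6  # assumed runs by mature civilizations

fraction_simulated = ancestor_simulations / (ancestor_simulations + original_histories)
print(f"Fraction of observers who are simulated: {fraction_simulated:.6f}")
# -> 0.999999; a typical observer should expect to be in a simulation
```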
AR: Professor Bostrom is director of the Future of Humanity Institute in
Oxford. I sent him a copy of my book G.O.D. Is Great, which matches his
range of interests rather closely, and he didn't reply. Thanks, Nick.