Singularity Summit 2009
Michael Vassar, Singularity Institute
Anna Salamon, Singularity Institute
Technical roadmap for whole brain emulation
Anders Sandberg, Future
of Humanity Institute
The time is now: As a species and as
individuals we need whole brain emulation
Randal Koene
Technological convergence leading to
artificial general intelligence
Itamar Arel, University of Tennessee
Pathways to beneficial artificial general intelligence: Virtual
pets, robot children, artificial bioscientists, and beyond
Neural substrates of consciousness and the
'conscious pilot' model
Stuart Hameroff, University of Arizona
Quantum computing: What it is, what it is not, what we have yet to learn
DNA: Not merely the secret of life
Ned Seeman, New York University
Compression progress: The
algorithmic principle behind curiosity, creativity, art, science, music, and humor
Juergen Schmidhuber, IDSIA
Conversation on the Singularity
Stephen Wolfram and Gregory Benford
David Chalmers, Australian National University
Choice machines, causality, and cooperation
Synthetic neurobiology: Optically engineering the brain to augment its
Ed Boyden, MIT Media Lab
Foundations of intelligent agents
Marcus Hutter, Australian National University
Cognitive ability: Past and future enhancements and implications
William Dickens, Northeastern University
The ubiquity and
predictability of the exponential growth of information technology
Ray Kurzweil, Kurzweil Technologies
More than Moore: Comparing
forecasts of technological progress
Bela Nagy, Santa Fe Institute
The "petaflop macroscope"
Gary Wolf, Wired Magazine
Collaborative networks in scientific discovery
How does society identify experts and when does it work?
Robin Hanson, George Mason University
Artificial biological selection
Gregory Benford, University of California, Irvine
Critics of the Singularity
Ray Kurzweil, Kurzweil Technologies
The finger of AI: Automated electrical vehicles and oil independence
Brad Templeton, Electronic Frontier Foundation
The fallibility and improvability of the human mind
Gary Marcus, New York University
Macroeconomics and Singularity
Peter Thiel, Clarium Capital
The Singularity and the Methuselarity: Similarities and differences
Aubrey De Grey, SENS Foundation
Cognitive biases and giant risks
Eliezer Yudkowsky, Singularity Institute
How much it matters to know what matters: A back-of-the-envelope calculation
Anna Salamon, Singularity Institute
Edited by Andy Ross
Singularity Hub, October 5, 2009
The Singularity Summit 2009, New York, October 3-4, was a resounding
success. Over 800 attendees crowded the venue at the 92nd Street Y for a
program of more than 30 talks and panels in rapid succession.
A welcome addition to the
program was a conversation between Gregory Benford, science-fiction author
and now chairman of Genescient, and Stephen Wolfram, developer of the
symbolic computation system Mathematica.
Eliezer Yudkowsky illustrated
the unacceptable downsides of not properly dealing with the issues of the
singularity and of artificial general intelligence in his talk on cognitive
biases and giant risks.
Popular Science, October 3-4, 2009
Edited by Andy Ross
Open The Pod Bay Door, HAL
Ray Kurzweil's concept of the Singularity rests on two axioms: that
computers will become more intelligent than humans, and that humans and
computers will merge, allowing us access to that increased thinking power.
According to Anna Salamon, artificial intelligence greater than our own
is inevitable and dangerous. She argued that biological brains have finite
capacity, leaving many hard problems beyond our reach. Salamon believes we
will create supercomputers to solve those problems for us. She worries that
if humans and AI
have divergent goals, we could find ourselves in competition with the AI for
resources to achieve those different goals. Salamon advocates starting now
to ensure that human-assisting missions get hardwired into the basic
architecture of artificial intelligence.
Anders Sandberg believes
that engineers have to base their first attempts at AI on the human brain.
So the first artificial brain would contain elements of the personality of
the test subject that the artificial brain copied. These traits could become
locked into all artificial intelligence as the initial AI software is
copied forward into later systems.
Just How's This Thing Gonna Work, Anyways?
How are we going to create artificial intelligence, and how are we going to
integrate ourselves with this advanced technology? Luckily, philosopher
David Chalmers was there to break it all down.
Chalmers rejected the
idea of brain emulation as the path to super-intelligent AI. He thinks that
we have to evolve AI by planting computer programs in a simulated
environment and selecting the smartest bots. Basically, set up the Matrix,
but with only artificial inhabitants.
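Chalmers's proposal is, in effect, an evolutionary search: score a population of programs on some measure of "smartness," keep the best, and mutate copies of them. Here is a minimal sketch in Python, where the bit-string genome, the population size, and the bit-counting fitness are purely illustrative stand-ins, not anything Chalmers specified:

```python
import random

def evolve(fitness, genome_len=8, pop_size=30, generations=200, seed=0):
    """Toy genetic algorithm: keep the 'smartest' genomes, mutate copies of them."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # select the smartest half
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1  # one point mutation per child
            children.append(child)
        pop = survivors + children                 # elitism: survivors persist unchanged
    return max(pop, key=fitness)

# Stand-in "smartness": the count of 1-bits. A real version would instead score
# each program's behavior inside the simulated environment.
best = evolve(fitness=sum)
print(best, sum(best))
```

The selection-plus-mutation loop is the whole mechanism; everything hard in Chalmers's scenario hides inside the fitness function.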
To ensure that the resultant AI
adheres to pro-human values, we would have to set up a "leak-proof" world
where we control what goes in, and can prevent any of the artificial
consciousness from becoming aware of us too early and escaping. As
Chalmers sees it, the second the artificial personalities become as smart as
us, they will emulate our leap by creating AI even smarter than themselves
inside their simulated world. Essentially, they will undergo their own
singularity. This will start a chain reaction that quickly
leads to a digital intelligence far greater than anything we ever imagined.
Unless, of course, the first AI more intelligent than us uses that
additional foresight to realize creating intelligence greater than itself is
a bad idea, and cuts off the entire process.
But assuming that AI
does manage to get smarter than us, we will have to either integrate with
it, coexist with it as a lower form of life, or pull the plug. Chalmers sees
integration as the only way to go. He advocates physically replacing one
neuron at a time with a digital equivalent, while the person is awake, so as
to retain continuity of personality.
Supreme Mathematics of Gods and Earths
Stephen Wolfram believes that fundamental programs underlie all the behavior
in our universe, as well as many phenomena that don't exist in a universe
with our physics. These computations that Wolfram identifies as embedded in
reality exist independently of our observation.
Wolfram calls the
total set of all possible programs the computational universe. By running
mathematical experiments, examining the natural world and decoding the
behavior of reality, we can explore this universe, uncovering programs new
to humanity, but not new to the universe.
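Wolfram's standard examples of such programs are elementary cellular automata, where a single small rule number fixes all behavior. A minimal sketch of one of them, Rule 30, a rule Wolfram has studied extensively (the grid width and wrap-around boundary here are arbitrary choices for illustration):

```python
def step(cells, rule=30):
    """One update of an elementary cellular automaton, in Wolfram's numbering."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right  # 3-cell neighborhood as a 3-bit index
        out.append((rule >> idx) & 1)              # look the result up in the rule number
    return out

# Start from a single live cell; Rule 30 quickly produces an irregular pattern.
row = [0] * 15
row[7] = 1
for _ in range(6):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Enumerating the rule number from 0 to 255 is exactly the kind of exhaustive "mining" of simple programs Wolfram describes.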
Wolfram likened these
programs to minerals like crystals and magnetic metals. He described a world
where scientists and mathematicians mine the computational universe for new
knowledge. Wolfram identifies computerized computational mining as
the catalyst for the emergence of artificial intelligence. But he isn't
concerned that this AI will immediately threaten our extinction. After all,
the program only exists to find new knowledge. How could killing us help in
that search?
Not everyone at the conference bought this idea of a
benign artificial intelligence. Maybe this pervasive fear of AI-led
extermination just reflects our own inability to imagine a consciousness
without the aggressive need to destroy humanity.
Thus Spake Kurzweil
Kurzweil is the man everyone came to see. After the standing ovation died
down, the auditorium reached its quietest point yet, as the collected
skeptics, crazies, and disciples waited to hear from the first prophet of
the Singularity.
Kurzweil shored up the faithful, calming any doubts they
had about the sheer ambition of his claims, and presenting even stronger
evidence that the Singularity is inevitable and impending. The blatant
clarity and simplicity of his argument and evidence left no doubts about
Kurzweil's profound intellect.
Since 1890, computing power has grown
a trillionfold, and a billionfold in the
last 25 years. A single computer will equal the storage capacity and speed
of the human brain by around 2029. And once a computer can map out every
single neuron, connection, and firing of a brain, someone will make a
digital copy of it.
Kurzweil didn't convince me that a digital brain
will spontaneously assume human-like consciousness and self-awareness. In
fact, he didn't convince me that anyone had even the slightest clue as to
what will really happen once we cross that threshold.
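Taken at face value, the billionfold figure quoted above implies a concrete doubling rate, which is easy to check. A quick back-of-the-envelope in Python, where the inputs are just the claims as reported, not independent data:

```python
import math

growth_factor = 1e9   # claimed billionfold growth in computing power (figure from the text)
years = 25.0          # claimed period

doublings = math.log2(growth_factor)            # how many doublings that growth implies
months_per_doubling = years * 12 / doublings
print(f"{doublings:.1f} doublings, one every {months_per_doubling:.1f} months")
```

A billionfold increase in 25 years works out to roughly 30 doublings, about one every ten months, which is faster than the classic two-year Moore's law cadence.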
I was reminded
of the Human Genome Project, which assumed that once every gene got mapped
out, it would be easy to put a person together from scratch. Now the simple
theory that DNA codes, RNA prints, and protein acts seems increasingly
simplistic and naive.
Neuroscience will soon start revealing similar
complexities in the brain. And as the process of consciousness proves more
and more intricate, the computing power needed to reproduce it will rise and
rise, pushing back the date of the Singularity.
Kurzweil is on to
something. But no one knows what that something is, or when it will really
arrive.
What Does a Beer Taste Like After the Singularity?
By Glenn Derene
Popular Mechanics, October 5, 2009
Edited by Andy Ross
Imagine a techno-futurist rapture when artificial intelligence becomes
smarter than human intelligence, and computers are able to improve and
refine themselves at an accelerated pace. Many singularitarians support the idea
of uploading one's consciousness to the machines. With your mind freed into
the digital realm, you will be immortal.
Gary Marcus, director of the
NYU Center for Child Language, described the human mind as an evolutionary
kluge, producing a wonderfully refined sense of vision, but a terribly
deficient cue-based memory recall system. Altering the mind with a more
logical computer system could reduce errors and improve our basic
cognitive capacities.
But I think this stuff is a lot harder than these folks
make it out to be. Before we can decide what we want from artificial
intelligence, we need to figure out just what we mean when we describe
human intelligence.
If any computer becomes self-aware enough to start
refining its own intelligence at an accelerated rate, I'm going to unplug
that thing and take a baseball bat to it.
Will Our Robot Overlords Be Friendly?
Ronald Bailey, October 6, 2009
Edited by Andy Ross
The Singularity Institute for Artificial Intelligence (SIAI) was created to
address the urgent problem of how to create super-smart AIs that are
friendly to us. Smarter intelligences might choose to get rid of us because
our matter is not optimally arranged to achieve their goals.
Sandberg offered a technical roadmap for whole brain emulation. He argued
that it is possible to foresee how developments in neuroscience and computer
science are leading to emulation of specific people's brains. Sandberg
believes that emulating a human brain is only 20 years away. But we don't
know if a one-to-one emulation would produce a mind or not.
Koene argued that the time is now to go after mind uploading. Radically
increasing human longevity solves a few problems, but doesn't deal with our
dependence on changeable environments and scarce resources. Nor does it deal
with our intellectual limitations, death by traumatic accidents, or disease.
David Chalmers argued that personal identity would be maintained if the
functional organization of the upload was the same as the original. In
addition, gradual uploading might also be a way to maintain personal
identity. Chalmers also speculated about reconstructive uploading in which a
super-smart AI would scour the world for information about me, then
instantiate that information in the appropriate substrate. On the optimistic
view, being reconstructed from the informational debris you left behind
would be like waking up from a long nap.
Ray Kurzweil envisions
integration between humans and their neural prosthetics. Over time, more and
more of the neural processing that makes us who we are will be located
outside our bodies and brains so that uploading will take place gradually.
Our uploaded minds will function much faster and more precisely than our
meat minds do today. We will join the singularity as our artificial parts
come to dominate our thinking.
Peter Thiel asked us to vote on which of seven
scenarios we're most worried about:
— Singularity happens and robots kill us all (the Skynet scenario)
— Biotech terrorism using something more virulent than smallpox and Ebola combined
— Nanotech grey goo escapes and eats up all organic matter
— Israel and Iran engage in a thermonuclear war that goes global
— A one-world totalitarian state
— Runaway global warming
— The singularity takes too long to arrive
Aubrey de Grey says progress in regenerative medicine could advance faster
than a person approaches death from aging.
Salamon argued that an intelligence explosion can't be controlled once it starts.
Al Fin, October 7, 2009
The singularity is only the latest of names for an idea that has been around
for millennia. The singularity movement can be seen as a religious faith.
The combination of religious faith and goal orientation with scientific and
engineering rigor can lead to something truly amazing.
My selections from an IEEE
Spectrum Special Issue on the Singularity
Kurzweil says anyone alive
in 2040 or so could be close to immortal.