IEEE SPECIAL REPORT
THE SINGULARITY
IEEE Spectrum, June 2008
Reports below edited by Andy Ross
Waiting for the Rapture
By Glenn
Zorpette
The singularity has been called the rapture of the
geeks.
The singularity is supposed to begin shortly after engineers
build the first computer with greater-than-human intelligence. That
achievement will trigger a series of cycles in which superintelligent
machines beget even smarter machine progeny, going from generation to
generation in weeks or days rather than decades or years. This will spark
explosive economic growth and a technoindustrial rampage that will suck us
beyond the event horizon.
But the singularity is much more than a
sci-fi subgenre. In the coming years, as computers become stupendously
powerful and as electronics and other technologies begin to enhance and fuse
with biology, life really is going to get more interesting.
So we
invited articles from people who impressed us with their achievements and
writings on subjects central to the singularity idea. On consciousness, we
have John Horgan. We also have Christof Koch and Giulio Tononi,
neuroscientists who specialize in consciousness. Rodney Brooks, from MIT,
weighs in on the future of machine intelligence. For the last word, we
turned to Vernor Vinge, who launched the singularity movement in 1993.
That movement has evolved since then into an array of competing
hypotheses and scenarios. But central to them all is the idea of a conscious
machine.
Consciousness seems mystical and inextricably linked to
organisms. What happens in the cerebral cortex that turns objective
information into subjective experience? We don't know, but we will someday.
No one argues that consciousness arises from anything but biological
processes in the brain. The brain is a computer.
The Consciousness Conundrum
By John
Horgan
I would love to believe that we are rapidly approaching
the singularity. Technological singularity comes in many versions, but most
involve bionic brain boosting. At first, we'll become cyborgs. Eventually,
we will abandon our flesh-and-blood selves entirely and upload our digitized
psyches into computers. We will then dwell happily forever in cyberspace.
Intoxicated by the explosive progress of information technologies,
singularitarians foresee a "merger of biological and nonbiological
intelligence," as Ray Kurzweil puts it, that will culminate in "immortal
software-based humans." It will happen not within a millennium, or a
century, but no later than 2030, according to Vinge.
Neuroscientists
still do not understand at all how a brain makes a conscious mind. "No one
has the foggiest notion," says the neuroscientist Eric Kandel of Columbia
University Medical Center, in New York City.
Gerald Edelman, a Nobel
laureate and director of the Neurosciences Institute, in San Diego, says
singularitarians vastly underestimate the brain's complexity. A healthy
adult brain contains about 100 billion neurons. A single neuron can be
linked via axons (outputs) and dendrites (inputs) across synapses (gaps
between axons and dendrites) to as many as 100 000 other neurons. A typical
human brain has quadrillions of connections among its neurons. Adding to the
complexity, synaptic connections constantly form, strengthen, weaken, and
dissolve.
Nevertheless, the brain is a computer. Neurons resemble
transistors, processing electrochemical pulses known as action potentials.
With an amplitude of one-tenth of a volt and a duration of one millisecond,
action potentials are remarkably uniform. Also called spikes, action
potentials serve as the brain's basic units of information.
If the
brain contains one quadrillion synapses processing on average 10 action
potentials per second, then the brain performs 10 quadrillion operations per
second. If current trends continue, supercomputers will exceed 10
quadrillion operations per second within a decade.
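A back-of-envelope version of that arithmetic, as a Python sketch. The figures are the article's; treating one synaptic event as one "operation" is the singularitarians' working assumption, not an established equivalence.

    # Crude estimate of the brain's processing rate from the article's figures.
    synapses = 1e15            # roughly one quadrillion synapses
    spikes_per_second = 10     # average action potentials per second
    ops_per_second = synapses * spikes_per_second
    print(f"{ops_per_second:.0e} ops/s")   # 1e+16, i.e. 10 quadrillion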
Intelligence
requires software as much as hardware. In the next couple of decades,
scientists will reverse engineer the brain's software. First, the brain's
programming tricks will be transferred to computers to make them smarter.
Eventually, our personal software will be extracted from our frail flesh and
uploaded into advanced robots or computers.
Neuroscientists suspect
that the brain employs a temporal code, in which information is represented
not just in a cell's rate of firing but also in the precise timing between
spikes. Biophysicist William Bialek of Princeton University calculates that
temporal coding would boost the brain's information-processing capacity
close to the Shannon limit.
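A rough illustration of why timing could add capacity, as a standard entropy-rate sketch rather than Bialek's actual calculation; the 1-millisecond timing precision is an assumed figure.

    import math

    def binary_entropy(p):
        """Shannon entropy in bits of a yes/no event with probability p."""
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    rate = 10.0      # average spikes per second (the article's figure)
    dt = 1e-3        # assumed timing precision: 1-ms bins
    p = rate * dt    # probability of a spike in any one bin

    timing_bits_per_s = binary_entropy(p) / dt
    print(f"~{timing_bits_per_s:.0f} bits/s with spike timing")   # prints ~81
    # Counting spikes over, say, 100-ms windows distinguishes only a few
    # counts per window, i.e. far fewer bits per second.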
Edelman has advocated a scheme called
neural Darwinism, in which our recognition of, say, an animal emerges from
competition between large populations of neurons representing different
memories. The brain quickly settles on the population that most closely
matches the incoming stimulus.
Wolf Singer of the Max Planck
Institute for Brain Research, in Frankfurt, has won more support for a code
involving many neurons firing at the same rate and time. Singer thinks such
synchronous oscillations might play a crucial role in cognition and perhaps
even underpin consciousness.
In 1990, the late Nobel laureate Francis
Crick and his colleague Christof Koch proposed that 40-hertz synchronized
oscillations were one of the neuronal signatures of consciousness. But
Singer says the brain probably employs many different codes in addition to
oscillations. He also emphasizes that researchers are "only at the beginning
of understanding" how neural processes "bring forth higher cognitive and
executive functions."
Singer calls the idea of an imminent
singularity science fiction. Koch shares Singer's skepticism. There may be
no universal principle governing neural-information processing, Koch says.
Researchers at the University of Southern California, in Los Angeles,
have designed chips that mimic the firing patterns of tissue in the
hippocampus, a neural structure thought to underpin memory. Biomedical
engineering professor Theodore Berger, a leader of the USC program, has
suggested that one day brain chips might allow us to instantly upload
expertise.
Andrew Schwartz, a neural-prosthesis researcher at the
University of Pittsburgh, has shown that monkeys can learn to control
robotic arms by means of chips embedded in the brain's motor cortex. But no
one has any idea how memories are encoded, Schwartz says.
That brings
us to the interface problem. For now, electrodes implanted into the brain
remain the only way to precisely observe and fiddle with neurons. It is a
much messier, more difficult, and more dangerous interface than most people
realize. The electrodes must be inserted into the brain through holes
drilled in the skull, posing the risk of infection and brain damage.
Researchers are testing various strategies for improving contact between
neurons and electronics. They are making electrodes out of conducting
polymers, coating electrodes with natural cell-adhesion molecules, and
designing arrays of electrodes that automatically adjust their position.
At Caltech and elsewhere, engineers have designed hollow electrodes that
can inject fluids into the surrounding tissue. The fluids could consist of
nerve-growth factors, neurotransmitters, and other substances.
Neuroscientists are also testing optical devices and genetic switches.
Terry Sejnowski, a neuroscientist at the Salk Institute for Biological
Studies, in San Diego, says the new technologies will make it possible "to
selectively activate and inactivate specific types of neurons and synapses
as well as record from all the neurons in a volume of tissue."
Even
singularitarians concede that no existing interface can provide what is
required for bionic convergence and uploading. So they predict that current
interfaces will soon yield to nanobots, which will infiltrate the brain, record all neural activity, and manipulate it by zapping neurons, tinkering with synaptic links, and so on. The nanobots will be equipped with some sort
of Wi-Fi so that they can communicate with one another as well as with
electronic systems inside and outside the body.
Steven Rose, a
neurobiologist at England's Open University, says a lot can be done to
improve the brain's performance through improved drugs, neural prostheses,
and perhaps genetic engineering. But he calls the claims about imminent
consciousness uploading "pretty much crap."
Rose disputes the
singularitarians' contention that computers will soon surpass the brain's
computational capacity. He suspects that computation occurs at scales above
and below the level of individual neurons and synapses, via genetic,
hormonal, and other processes. So the brain's total computational power may
be many orders of magnitude greater than what singularitarians profess.
Rose also rejects the basic premise of uploading, that our psyches
consist of nothing more than algorithms that can be transferred from our
bodies to entirely different substrates, whether silicon or glass fibers or
quantum computers. The information processing that constitutes our selves,
Rose asserts, evolved within a social, crafty, emotional, sex-obsessed
flesh-and-blood primate.
If the brain were simple enough for us to
understand, we wouldn't be smart enough to understand it.
The
singularity is a religious rather than a scientific vision. It is the
rapture for nerds.
Can Machines Be Conscious?
By
Christof Koch and Giulio Tononi
In some quarters it is taken for
granted that within a generation, human beings will have an alternative to
death: being a ghost in a machine. You'll be able to upload your mind to a
computer. And once you've reduced your consciousness to patterns of
electrons, others will be able to copy it, edit it, sell it, or pirate it.
It might be bundled with other electronic minds. And, of course, it could be
deleted.
Some of the most brilliant minds in human history have
pondered consciousness. We know it arises in the brain, but we don't know
how or where in the brain.
Nevertheless, some in the singularity
crowd are confident that we are within a few decades of building a computer
that can experience things as we do. It might be a robot. Or it might just
be software: a huge, ever-changing cloud of bits that inhabits an immensely complicated and elaborately constructed virtual domain.
We are among
the few neuroscientists who have devoted a substantial part of their careers
to studying consciousness. Our work has given us a unique perspective on
whether consciousness will ever be artificially created.
We think it
will, eventually. But perhaps not in the way that the most popular scenarios
have envisioned it.
Consciousness is part of the natural world. It
depends, we believe, only on mathematics and logic and on the imperfectly
known laws of physics, chemistry, and biology. It does not arise from some
magical or otherworldly quality.
In humans and animals, we know that
the specific content of any conscious experience is furnished by parts of
the cerebral cortex. If a sector of the cortex is destroyed, the person will
no longer be conscious of whatever aspect of the world that part of the
brain represents. To be conscious also requires the corticothalamic system
to be constantly suffused in a bath of substances known as neuromodulators,
which aid or inhibit the transmission of nerve impulses.
Much of what
goes on in the brain has nothing to do with being conscious. Widespread
damage to the cerebellum, the small structure at the base of the brain, has
no effect on consciousness. Neural activity obviously plays some essential
role in consciousness but in itself is not enough to sustain a conscious
state.
Clinical studies and basic research have given us a complex if
still rudimentary understanding of the myriad processes that give rise to
consciousness. We are still a very long way from being able to use this
knowledge to build a conscious machine. Yet we can list some aspects of
consciousness that are not strictly necessary for building such an artifact.
Consciousness does not seem to require emotions, memory, self-reflection, language, sensing the world, or acting in it. When we
dream, we are virtually disconnected from the environment but we are
conscious, and the corticothalamic system continues to function more or less
as it does in wakefulness.
So although being conscious depends on
brain activity, it does not require any interaction with the environment.
Whether the development of consciousness requires such interactions in early
childhood is a different matter.
Being conscious does not require
emotion. People who've suffered damage to the frontal area of the brain may
exhibit a flat, emotionless affect. But they still experience the sights and
sounds of the world much the way normal people do.
Primal emotions
are useful and perhaps even essential for the survival of a conscious
organism. Likewise, a conscious machine might rely on emotions to make
choices and deal with the complexities of the world. But it could be just a
calculating engine, and yet still be conscious.
Psychologists argue
that consciousness requires selective attention. When you pay attention to
something, you become conscious of that thing and its properties. When your
attention shifts, the object fades from consciousness.
Nevertheless,
a person can consciously perceive an event or object without paying
attention to it. Conversely, people can attend to events or objects without
consciously perceiving them.
Episodic memory would seem to be an
integral part of consciousness. But being conscious does not require either
explicit or working memory.
Self-reflection is another human trait
that seems deeply linked to consciousness. To assess consciousness,
psychologists and other scientists often rely on verbal reports from their
subjects. But being conscious does not require self-reflection. When we
become absorbed in some intense perceptual task, we are vividly conscious of
the external world without any need for reflection or introspection.
Neuroimaging studies suggest that we can be vividly conscious even when the
front of the cerebral cortex, involved in judgment and self-representation,
is relatively inactive. Patients with widespread injury to the front of the
brain demonstrate serious deficits but they appear to have nearly intact
perceptual abilities.
Finally, being conscious does not require
language. There are many patients who lose the ability to understand or use
words and yet remain conscious. And infants, monkeys, dogs, and mice cannot
speak.
We assume that a machine does not require anything to be
conscious that a naturally evolved organism doesn't require. A conscious
machine does not need to engage with its environment, or have long-term
memory or working memory, or attention, self-reflection, language, or
emotion.
So what are the essential properties of consciousness?
We think the answer has to do with the amount of integrated information
that an organism, or a machine, can generate. Information is classically
defined as the reduction of uncertainty that occurs when one among many
possible outcomes is chosen.
But conscious experience consists of
more than just differentiating among many states. Consider an idealized
1-megapixel digital camera. Even if each photodiode in the imager were just
binary, the number of different patterns that imager could record is 2
raised to the power 1 million. Yet the camera is obviously not conscious.
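The arithmetic behind that repertoire, in a short Python sketch; "classical information" here is simply the base-2 logarithm of the number of distinguishable sensor states.

    import math

    pixels = 1_000_000       # idealized 1-megapixel sensor, binary photodiodes
    shannon_bits = pixels    # log2(2 ** pixels) = 1,000,000 bits
    decimal_digits = int(pixels * math.log10(2)) + 1
    print(f"{shannon_bits} bits of classical information")
    print(f"about 10^{decimal_digits - 1} distinct patterns")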
We think that the difference between you and the camera has to do
with integrated information. The 1-megapixel sensor chip isn't a single
integrated system but rather a collection of one million individual,
completely independent photodiodes, each with a repertoire of two states.
And a million photodiodes are collectively no smarter than one photodiode.
By contrast, the repertoire of states available to you cannot be
subdivided. You know this from experience. Underlying this unity is a
multitude of causal interactions among the relevant parts of your brain. And
unlike chopping up the photodiodes in a camera sensor, disconnecting the
elements of your brain that feed into consciousness would have profoundly
detrimental effects.
To be conscious, you need to be a single
integrated entity with a large repertoire of states. Your level of
consciousness has to do with how much integrated information you can
generate.
The integrated information theory of consciousness, or IIT,
is grounded in the mathematics of information and complexity theory and
provides a specific measure of the amount of integrated information
generated by any system comprising interacting parts. We call that measure Φ and express it in bits. The larger the value of Φ, the larger the entity's conscious repertoire. Φ is an intrinsic property of the system, different from the Shannon information that can be sent through a channel.
IIT
suggests a Turing Test for consciousness. According to IIT, consciousness
implies the availability of a large repertoire of states belonging to a
single integrated system. To be useful, those internal states should also be
highly informative about the world. One test would be to ask the machine to
describe a scene in a way that efficiently differentiates the scene's key
features from the immense range of other possible scenes. Humans are
fantastically good at this.
So we can test for machine consciousness
by showing it a picture and asking it for a concise description. The machine
should be able to extract the gist of the image and what's happening. The
machine should also be able to describe which objects are in the picture and
which are not, as well as the spatial relationships among the objects and
the causal relationships.
No machine or program comes close to
pulling off such a feat today. In fact, image understanding remains one of
the great unsolved problems of artificial intelligence.
To build a
conscious machine, we can either copy the mammalian brain or evolve a
machine. Research groups worldwide are already pursuing both strategies.
The Association for the Scientific Study of Consciousness
Executive director: Christof Koch, professor of cognitive and behavioral biology at Caltech
President-elect: Giulio Tononi, professor of psychiatry at the University of Wisconsin, Madison
Consciousness as Integrated Information
By Giulio
Tononi
The integrated-information theory of consciousness, or
IIT, is an attempt to approach consciousness from first principles.
IIT introduces a measure of integrated information, represented by the symbol Φ and given in bits, that quantifies the reduction of uncertainty generated by a system as a whole, above and beyond the information generated independently within its parts.
A system with a positive value of Φ is called a complex. When a complex enters a particular state of its repertoire, it generates an amount of integrated information corresponding to Φ. Thus, a simple photodiode that can detect the presence or absence of light is a complex with Φ = 1 bit. The sensor chip of a digital camera, on the other hand, would not form a complex: as such it would have Φ = 0 bits, as each photodiode does its job independently of the others. In principle, it can be decomposed into individual photodiodes, each with Φ = 1 bit.
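One way to see the decomposition argument in code is the toy sketch below. It computes total correlation (the sum of the parts' entropies minus the joint entropy), which is only a crude stand-in for Φ, and it assumes each listed joint state is equally likely.

    import itertools
    import math
    from collections import Counter

    def entropy_bits(counts):
        """Shannon entropy, in bits, of a distribution given as counts."""
        total = sum(counts)
        return -sum(c / total * math.log2(c / total) for c in counts if c)

    def total_correlation(states):
        """Sum of marginal entropies minus joint entropy.
        Zero when the elements are independent, positive when integrated.
        A crude proxy only, not the full IIT measure."""
        n = len(states[0])
        joint = entropy_bits(Counter(states).values())
        marginals = sum(
            entropy_bits(Counter(s[i] for s in states).values())
            for i in range(n)
        )
        return marginals - joint

    # "Camera": three photodiodes, each free to be 0 or 1 independently.
    camera = list(itertools.product([0, 1], repeat=3))   # all 8 joint states
    # "Complex": three elements whose interactions lock them together.
    locked = [(0, 0, 0), (1, 1, 1)]                       # only 2 joint states

    print(total_correlation(camera))   # 0.0: decomposes into independent parts
    print(total_correlation(locked))   # 2.0: the whole constrains its parts

Note that the locked toy system has a smaller repertoire than the camera; high Φ requires both strong integration and a large repertoire of states, which is the demanding combination.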
Within the awake human brain, on the other hand, there must be some
complex whose Φ value is on average very high, corresponding to our large
repertoire of conscious states that are experienced as a whole. Because
integrated information can be generated only within a complex and not
outside its boundaries, it follows that consciousness is subjective and
related to a single point of view or perspective.
Given the vast
number of ways even a simple information-processing system can be
decomposed, measuring Φ can be done only for very simple systems. Also, the value of Φ depends on both spatial and temporal scales that determine what
counts as elements and states of a system.
With the aid of computer
simulations, one can try out different networks and see which architectures
yield high values of Φ. Such simulations indicate that high Φ requires
networks that combine functional specialization with functional integration
so that each element has a unique function within the network and there are
many pathways for interactions among the elements. In very rough terms, this
kind of architecture describes the mammalian thalamocortical system. The
thalamocortical system is the part of the brain that cannot be severely
impaired without loss of consciousness.
Conversely, Φ is low for
systems made up of small, quasi-independent modules. This suggests that
parts of the brain that are organized in an extremely modular manner, where
modules hardly interact, should not contribute directly to consciousness.
The cerebellum offers an example. If the cerebellum is damaged,
consciousness is largely unaffected. Although the cerebellum is a powerful
computer, it is the wrong machine for consciousness, being far too modular
to generate much integrated information.
Computer simulations also
indicate that parallel input or output pathways can be attached to a complex
of high Φ without becoming part of it. This may explain, for example, why
the retina, which is connected to the visual cortex by multiple parallel
pathways, does not directly contribute to visual consciousness.
Simulations show that a complex of high Φ can be augmented by attaching local circuits to some of its elements, and yet the attached circuits may remain outside of the high-Φ complex. In the brain, many computations that remain unconscious appear to be carried out by cortical and subcortical circuits that are informationally insulated.
Consciousness is associated with neural architectures that form a single entity having a large repertoire of states. It seems that one way to achieve high values of Φ is to build a network that is both highly specialized and
highly integrated.
According to this theory, Φ, and therefore
conscious experience, is graded. It is not an all-or-none property that only
sufficiently complex systems possess. Any physical system with some capacity
for integrated information would have some degree of conscious experience.
Here we have only our first-person evidence to fall back on. Practically
speaking, we can think of the Φ value associated with dreamless sleep or
general anesthesia as a threshold for consciousness. Anything with a smaller
Φ is no more conscious than you or I during such a dreamless state.
Do You Need a Quantum Computer to Achieve Machine Consciousness?
By
Christof Koch
Oxford University cosmologist Roger Penrose has
surmised that a yet-to-be-discovered quantum theory of gravity lies at the
core of consciousness. If that is true, you would not be able to upload your
consciousness into a classical machine. It would have to be a machine that
exploited quantum entanglement at the level of its elementary gates. You'd
need a quantum computer, with the processing and memory capacity of a human
brain.
However, there is no compelling evidence that brains exploit
any of the special features of quantum mechanics. The components of the
nervous system would make it very difficult to retain entangled states, or
qubits, over the necessary spatial-temporal dimensions. It is likely that to
simulate or emulate brain-based functions, including consciousness,
computers built out of classical, nonquantum gates will suffice.
I Am a Robot
By Rodney
Brooks
I am a machine. So are you. We are really sophisticated
machines made up of billions and billions of biomolecules that interact
according to rules deriving from physics and chemistry.
If we really
are machines and if we learn the rules governing our brains, then in
principle there's no reason why we shouldn't be able to replicate those
rules in, say, silicon and steel. I believe our creation would exhibit
genuine human-level intelligence, emotions, and even consciousness.
One day we will create an artificial general intelligence, or AGI. Some
researchers believe that AGIs will undergo a positive-feedback
self-enhancement until their comprehension of the universe far surpasses our
own. Our world, those individuals say, will change in unfathomable ways
after such superhuman intelligence comes into existence, an event they refer
to as the singularity.
Ray Kurzweil and his colleagues believe that
this super AGI will be created either through ever-faster advances in
artificial intelligence or by more biological means, such as "direct
brain-computer interfaces, biological augmentation of the brain, genetic
engineering," and "ultrahigh-resolution scans of the brain followed by
computer emulation." They think it will happen sometime in the next two or
three decades.
Some singularitarians believe our world will become a
kind of techno-utopia, with humans downloading their consciousnesses into
machines to live a disembodied, after-death life. Others anticipate a kind
of techno-damnation in which intelligent machines will be in conflict with
humans, maybe waging war against us. Their arguments are plausible, but
plausibility is by no means certainty.
I don't think there is going
to be one single sudden technological "big bang" that springs an AGI into
"life." Starting with the mildly intelligent systems we have today, machines
will become gradually more intelligent, generation by generation. The
singularity will be a period, not an event.
This period will
encompass a time when we will invent, perfect, and deploy ever more capable
systems, driven by the usual economic and sociological forces. Eventually,
we will create truly artificial intelligences, with cognition and
consciousness recognizably similar to our own. I strongly suspect it won't
happen before 2030.
But I expect the AGIs of the future, embodied as robots, to emerge gradually and symbiotically with our society. At the same
time, we humans will transform ourselves. We will incorporate a wide range
of advanced sensory devices and prosthetics to enhance our bodies. As our
machines become more like us, we will become more like them.
We have
many very hard problems to solve before we can build anything that might
qualify as an AGI. Many problems have become easier as computer power has
increased on its exponential and seemingly inexorable way. But we also need
fundamental breakthroughs.
Consider four basic capabilities that any true AGI would have to possess:
The object-recognition capabilities of a 2-year-old child
The language capabilities of a 4-year-old child
The manual dexterity of a 6-year-old child
The social understanding of an 8-year-old child
What if there are some essential aspects of
intelligence that we still do not understand and that do not lend themselves
to computation? We might need a new conceptual framework. Creating a machine that can effectively demonstrate the four capabilities above may take 10 years, or it may take 100.
My early work on robotic insects showed me
the importance of coupling AI systems to bodies. I don't see why, by the
middle of this century, we shouldn't have humanoid robots with agile legs
and dexterous arms and hands.
I believe the AGIs of the future will
not only be able to act intelligently but also convey emotions, intentions,
and free will. In fact, one of my dreams is to develop a robot that people
feel bad about switching off, as if they were extinguishing a life.
Maybe some kind of AGI already exists on the Google servers, probably the
single biggest network of computers on our planet, and we aren't aware of
it. At the 2007 Singularity Summit, I asked Peter Norvig, Google's chief
scientist, if the company had noticed any unexpected emergent properties in
its network. He replied that they had not seen anything like that.
Will machines become smarter than us and decide to take over? I don't think
so. To begin with, there will be no "us" for them to take over from. We are
already starting to change ourselves from purely biological entities into
mixtures of biology and technology. We are more likely to see a merger of
ourselves and our robots before we see a standalone superhuman intelligence.
While we become more robotic, our robots will become more biological,
with parts made of artificial and yet organic materials. In the future, we
might share some parts with our robots.
We need not fear our machines, because we will always be a step ahead of them: we will adopt the new technologies used to build those machines right into our own heads and bodies.
Rodney Brooks is a professor at the Massachusetts Institute
of Technology. He researches the engineering of intelligent robots capable
of operating in real-world environments and how to understand human
intelligence by building humanoid robots. Brooks is also the chief technical
officer of iRobot Corp.
Ray Kurzweil and Neil Gershenfeld
By Tekla
S. Perry
MIT professor Neil Gershenfeld and technology futurist
Ray Kurzweil have long worked at the leading edges of physical science and
computer science. Both believe that we are on the event horizon of a
technological singularity.
David Dalrymple, now age 16 and an MIT
graduate student, began corresponding with Gershenfeld in 1999. Later that
year, Gershenfeld invited him to a White House event to demonstrate a device
he had built using Lego Mindstorms. There Dalrymple met Kurzweil. Dalrymple
worked with Kurzweil for three summers as an undergraduate. He graduated at
age 13 and is now working toward his Ph.D. under Gershenfeld.
Gershenfeld, director of MIT's Center for Bits and Atoms, studies the
boundary between computer science and physical science. Kurzweil has been
fascinated with modeling the physical world in computers, and believes he
may just survive long enough to see computers that are far smarter than
people.
For years, Dalrymple has been trying to reconcile these two
visions of the future: Gershenfeld's future in which computers collapse and
simply become part of reality, and Kurzweil's future in which reality as we
know it collapses and simply becomes part of computers.
Ray
Kurzweil
"We see these apparently opposing trends in many
contexts. Studying natural intelligence gives us the insights to create
artificial intelligence while at the same time artificial intelligence is
extending our natural intelligence. Reverse-engineering biology is giving us
creative new designs for advanced technologies, while those same
technologies overcome the limitations of biology."
"We will be
infusing physical reality with embedded, distributed, self-organizing
computation everywhere. And at the same time we will be using these massive
and exponentially expanding computational resources to create increasingly
realistic, full-immersion virtual reality environments that compete with and
ultimately replace real reality."
Neil Gershenfeld
"I had always considered Ray and me to be headed in opposite directions.
He developed artificial intelligence and virtual worlds while I was
interested in the natural intelligence of physical systems. He forecast the
future while I was investigating technologies that are possible in the
present."
"The result for me has been an increasingly close
integration of physical science and computer science, bringing the
programmability of the digital world to the physical world. But whether
computers are merged with reality or reality is merged with computers, the
result is the same: the boundary between bits and atoms disappears."
Signs of the Singularity
By Vernor
Vinge
I think it's likely that with technology we can in the
fairly near future create or become creatures of more than human
intelligence. Such a technological singularity would revolutionize our
world, ushering in a posthuman epoch. In my 1993 essay, "The Coming
Technological Singularity," I said I'd be surprised if the singularity had
not happened by 2030. I'll stand by that claim, assuming we avoid
showstopping catastrophes.
I expect the singularity will come as some combination of the following:
The AI Scenario: We create superhuman artificial intelligence (AI) in computers.
The IA Scenario: We enhance human intelligence through human-to-computer interfaces to achieve intelligence amplification (IA).
The Biomedical Scenario: We directly increase our intelligence by improving the neurological operation of our brains.
The Internet Scenario: Humanity, its networks, computers, and databases become sufficiently effective to be considered a superhuman being.
The Digital Gaia Scenario: The network of embedded microprocessors becomes sufficiently effective to be considered a superhuman being.
Philosopher Alfred Nordmann criticizes the extrapolations used to argue for the singularity. Using trends for outright forecasting is asking for embarrassment. And yet there are a couple of trends that at least raise the possibility of the technological singularity. The first is a very long-term trend, namely life's tendency toward greater complexity.
In the last
few thousand years, humans have begun the next step, creating tools to
support cognitive function. We're building machines and systems that can
speed up the processes of problem solving and adaptation.
In recent
decades, the enthusiasts have been encouraged by an enabling trend: the
exponential improvement in computer hardware. If the economic demand for
improved hardware continues, it looks like Moore's Law can continue for some
time. Moore's Law enables improvement in communications, embedded logic,
information storage, planning, and design. As long as the software people
can successfully exploit Moore's Law, the demand for this progress should
continue.
Roboticist Hans Moravec may have been the first to draw a
numerical connection between computer hardware trends and artificial
intelligence. Writing in 1988, Moravec took his estimate of the raw
computational power of the brain together with the rate of improvement in
computer power and projected that by 2010 computer hardware would be
available to support roughly human levels of performance.
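The form of that projection is easy to reproduce. The sketch below uses illustrative figures (a 10-teraops brain estimate, gigaops-class machines in 1988, and an assumed 18-month doubling time), not Moravec's exact numbers.

    import math

    brain_ops_per_second = 1e13     # assumed human-equivalent processing power
    machine_ops_in_1988 = 1e9       # assumed top machine capability in 1988
    doubling_time_years = 1.5       # assumed hardware doubling time

    doublings_needed = math.log2(brain_ops_per_second / machine_ops_in_1988)
    parity_year = 1988 + doublings_needed * doubling_time_years
    print(round(parity_year))       # about 2008 with these assumed inputs

With these inputs the crossover lands near the article's 2010; changing the brain estimate by a factor of ten moves the answer by only about five years, which is why the hardware side of the argument is considered the easy part.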
Rodney
Brooks suggests that computation may not even be the right metaphor for what
the brain does. If we are profoundly off the mark about the nature of
thought, then this objection could be a showstopper. But research that might
lead to the singularity covers a much broader range than formal computation.
Consider economist Robin Hanson's "shoreline" metaphor for the boundary
between those tasks that can be done by machines and those that can be done
only by human beings. Once upon a time, there was a continent of human-only
tasks. By the end of the 1900s, that continent had become an archipelago. We
might recast much of our discussion in terms of the question, "Is any place
on the archipelago safe from further inundation?"
The goal of
enhancing human intelligence through human-computer interfaces (the IA
Scenario) is near. Today a well-trained person with a suitably provisioned
computer can look very smart indeed. Consider just a slightly more advanced
setup, in which an Internet search capability plus math and modeling systems
are integrated with a head-up display. The resulting overlays could give the
user a kind of synthetic intuition about his or her surroundings.
The Biomedical Scenario (directly improving the functioning of our own brains) has a lot of similarities to the IA Scenario, though computers would be only
indirectly involved, in support of bioinformatics. In the near future, drugs
for athletic ability may be only a small problem compared with drugs for
intellect.
Brooks suggests that the singularity might happen and yet
we might not notice. I think a pure Internet Scenario, where humanity plus
its networks and databases become a superhuman being, is the most likely to
leave room to argue about whether the singularity has happened or not. In
this future, there might be no explicit evidence of a superhuman player.
The Digital Gaia Scenario would probably be less deniable, if only
because of the palpable strangeness of the everyday world: reality itself
would wake up. The Digital Gaia would be something beyond human
intelligence, but nothing like human. Digital Gaia is a hint of how alien
the possibilities are.
The best answer to the question, "Will
computers ever be as smart as humans?" is probably "Yes, but only briefly."
Consciousness, intelligence, self-awareness, emotion: even their definitions have been debated since forever.
making progress with these mysteries. Some of the hardest questions may be
ill-posed, but we should see a continuing stream of partial answers and
surprises. Each partial success is removing more dross, closing in on the
ineffable features of mind. Of course, we may remove and remove and find that ultimately we are left with nothing but devices that are smarter than we are, and that is the singularity.
Vernor Vinge first used the
term singularity to refer to the advent of superhuman intelligence while on
a panel at the annual conference of the Association for the Advancement of
Artificial Intelligence in 1982. Three of his books won the Hugo Award for
best science-fiction novel of the year. From 1972 to 2000, Vinge taught math
and computer science at San Diego State University.
Tech Luminaries Address the Singularity
Douglas Hofstadter
Pioneer in computer modeling of
mental processes; director of the Center for Research on Concepts and
Cognition at Indiana University, Bloomington; winner of the 1980 Pulitzer
Prize for general nonfiction.
"It might happen someday, but I think
life and intelligence are far more complex than the current singularitarians
seem to believe, so I doubt it will happen in the next couple of centuries.
[The ramifications] will be enormous, since the highest form of sentient
beings on the planet will no longer be human."
Jeff Hawkins
Cofounder of Numenta, in Menlo Park, California, a company developing a
computer memory system based on the human neocortex. Also founded Palm
Computing, Handspring, and the Redwood Center for Theoretical Neuroscience.
"If you define the singularity as a point in time when intelligent
machines are designing intelligent machines in such a way that machines get
extremely intelligent in a short period of time an exponential increase in
intelligence then it will never happen. Intelligence is largely defined by
experience and training, not just by brain size or algorithms."
"Machines will understand the world using the same methods humans do; they
will be creative. Some will be self-aware, they will communicate via
language, and humans will recognize that machines have these qualities."
"I don't like the term 'singularity' when applied to technology. A
singularity is a state where physical laws no longer apply because some
value or metric goes to infinity, such as the curvature of space-time at the
center of a black hole. No one can predict what happens at a singularity."
"Exponential growth requires the exponential consumption of resources
(matter, energy, and time), and there are always limits to this. Why should
we think intelligent machines would be different? We will build machines
that are more 'intelligent' than humans, and this might happen quickly, but
there will be no singularity, no runaway growth in intelligence."
John Casti
Senior Research Scholar, the
International Institute for Applied Systems Analysis, in Laxenburg, Austria
and cofounder of the Kenos Circle, a Vienna-based society for exploration of
the future. Builds computer simulations of complex human systems. Author of
popular books about science.
"I think it's scientifically and
philosophically on sound footing. The only real issue for me is the time
frame over which the singularity will unfold. [The singularity represents]
the end of the supremacy of Homo sapiens as the dominant species on planet
Earth. At that point a new species appears, and humans and machines will go
their separate ways."
T.J. Rodgers
Founder
and CEO of Cypress Semiconductor Corporation, in San Jose, California. Owner
of the Clos de la Tech winery and vineyards, in California.
"I don't
believe in technological singularities. It's like extraterrestrial life if
it were there, we would have seen it by now."
"I don't believe in the
good old days. We live longer and better than our predecessors did and
that trend will continue in the future. We will also be freer, more well
educated and even smarter in the future but exponentially so, not as a
result of some singularity."
Eric Hahn
Serial
entrepreneur and early-stage investor who founded Collabra Software (sold to
Netscape) and Lookout Software (sold to Microsoft) and backed Red Hat,
Loudcloud, and Zimbra. CTO of Netscape during the browser wars.
"I
think that machine intelligence is one of the most exciting remaining 'great
problems' left in computer science. For all its promise, however, it pales
compared with the advances we could make in the next few decades in
improving the health and education of the existing human intelligences
already on the planet."
"I'm not worried about The Matrix or The Day
the Earth Stood Still. But I do hope the new intelligence doesn't run
Windows."
Gordon Bell
Principal researcher at
Microsoft Research, Silicon Valley. Led the development of or helped design
a long list of time-share computers and minicomputers at Digital Equipment
Corporation.
"Singularity is that point in time when computing is
able to know all human and natural-systems knowledge and exceed it in
problem-solving capability with the diminished need for humankind as we know
it. I basically support the notion, but I have trouble seeing the specific
transitions or break points that let the exponential take over and move to
the next transition."
Steven Pinker
Professor
of psychology at Harvard; previously taught in the department of Brain and
Cognitive Sciences at MIT, with much of his research addressing language
development. Writes best sellers about the way the brain works.
"There is not the slightest reason to believe in a coming singularity. The
fact that you can visualize a future in your imagination is not evidence
that it is likely or even possible. Look at domed cities, jet-pack
commuting, underwater cities, mile-high buildings, and nuclear-powered
automobiles, all staples of futuristic fantasies when I was a child that
have never arrived. Sheer processing power is not a pixie dust that
magically solves all your problems."
Gordon E. Moore
Cofounder and chairman emeritus of Intel Corporation, cofounder of
Fairchild Semiconductor, winner of the 2008 IEEE Medal of Honor, chairman of
the board of the Gordon and Betty Moore Foundation. The Moore of Moore's
Law.
"I am a skeptic. I don't believe this kind of thing is likely to
happen, at least for a long time. And I don't know why I feel that way. The
development of humans, what evolution has come up with, involves a lot more
than just the intellectual capability. You can manipulate your fingers and
other parts of your body. I don't see how machines are going to overcome
that overall gap, to reach that level of complexity, even if we get them so
they're intellectually more capable than humans."
Jim
Fruchterman
Founder and CEO of the Benetech Initiative, in
Palo Alto, one of the first companies to focus on social entrepreneurship.
Former rocket scientist and optical-character-recognition pioneer. Winner of
a 2006 MacArthur Fellowship.
"I believe the singularity theory is
plausible in that there will be a major shift in the rate of technology
change."
"I think that futurists are much more successful in
projecting simple measures of progress (such as Moore's Law) than they are
in projecting changes in human society and experience."
Esther Dyson
Commentator and evangelist for emerging
technologies, investor and board member for start-ups.
"The
singularity I'm interested in will come from biology rather than machines."
AR: Thanks, IEEE, for a great report. This was the theme of my 1996 novel Lifeball, where I called the singularity "syzygy". As for all the consciousness stuff, see my 2004 book Mindworlds. And as for my reactions to Kurzweil's singularitarianism, see my 2010 book G.O.D. Is Great.