Edge

Daniel Dennett and David Chalmers
 

Is Superintelligence Impossible?

An Edge conversation hosted by John Brockman in Brooklyn on April 10, 2019

A few highlights chosen by Andy Ross

DC The space of possible minds is absolutely vast. All the hundred billion human minds put together are just the tiniest corner of this space of possible minds. For the first time in the history of the planet, the computer has enabled some wholly new kinds of minds to come into existence.
Learning serves as a method for moving ahead in this space of possible minds. Start from a pretty simple mind with the capacity to learn, and it can get somewhere. Evolution is another such method.
Say we get to the first AI with human-level capacity for the various kinds of general intelligence. Within a year or two, this AI program will exceed the human level at designing AIs, and it will therefore be able to design an AI better than itself. This process is an amazing bootstrapping method for exploring the space of possible minds.
The AIs we can design may help design AIs far greater than any we could design ourselves. Those will reach a greater part of the space, and their successors a greater part still, until eventually there are probably vast advances across the space of possible minds. It's not going to happen soon, but we do have to think about it and we do have to worry about it.
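A toy way to see the bootstrapping structure of that argument is to simulate designers that each build a successor slightly better than themselves. The capability scores and the 5% gain per generation below are arbitrary illustrative assumptions, not anything claimed in the conversation.

```python
# Illustrative sketch of the recursive self-improvement argument.
# "capability" is an abstract score; 1.0 stands in for human-level design
# ability, and the 5% gain per generation is a purely arbitrary assumption.

def bootstrap(initial_capability=1.0, gain_per_generation=1.05, generations=20):
    capability = initial_capability
    trajectory = [capability]
    for _ in range(generations):
        # Each system designs a successor slightly better than itself.
        capability *= gain_per_generation
        trajectory.append(capability)
    return trajectory

if __name__ == "__main__":
    for generation, capability in enumerate(bootstrap()):
        print(f"generation {generation}: capability {capability:.2f}")
```

Even a modest gain compounds quickly in such a loop; whether real systems could sustain gains like this is exactly the open question.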
Where do we as humans stand with respect to these AIs? Will they replace us or enhance us? Do we ourselves eventually become the AIs? Do we upload ourselves to the forefront of this expanding wave of superintelligence?
DD What worries me is that we will, for the very best of reasons, turn over our responsibility for making major decisions to AIs that are just very intelligent tools. When we start delegating major life decisions to smart tools, that changes our human predicament in a very important way.
DC There are going to be many incentives to take the human out of the loop and give these AIs the capacity to act on that advice directly and autonomously. Biological systems are eventually going to be slow and creaky compared to these new AIs. Autonomy is going to be very hard to avoid.
DD Autonomy is synonymous with free will, and I don't think we want to give AI complete autonomy, because the technology by its nature has a certain invulnerability we don't have. You can back these systems up, put them back together again, and make another copy on Monday. If human beings were capable of being completely backed up and then brought back on Monday, that would change the nature of human interactions and human relations dramatically.
DC A system is autonomous when it has a wide variety of goals and the power to achieve them. The advance of autonomous AI will come from systems that not only have goals but can act to achieve them. This is a much more limited form of autonomy, and I'm not sure that consciousness would be required for it.
DD The difference comes out if you compare good old-fashioned AI with contemporary AI. At the moment, with deep learning and all the rest, these systems do amazing things, but they haven't been formed into anything like an agent. If we let these things evolve and learn, then what we know right from the outset is that we will not be in control.
DC When you have machine learning, you're optimizing an objective function. A good machine learning system will get better and better at optimizing that objective function. Once these systems have autonomy, we have a responsibility as the creators of the AI to make sure they are maximizing the right objective function. The challenge of autonomous AI is finding a way to make sure our systems have the right goals and values.
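As a minimal sketch of what "optimizing an objective function" means here, the loop below runs gradient descent on a simple quadratic objective. The objective, step size, and parameter are arbitrary stand-ins, not a description of any particular system discussed in the conversation.

```python
# Minimal sketch: a learning system improves by repeatedly nudging its
# parameter in the direction that reduces an objective (here, a toy "loss").
# The quadratic objective and the step size are arbitrary illustrations.

def objective(x):
    # Toy loss: lower is better, minimized at x = 3.
    return (x - 3.0) ** 2

def gradient(x):
    # Derivative of the objective with respect to x.
    return 2.0 * (x - 3.0)

def optimize(x=0.0, learning_rate=0.1, steps=50):
    for _ in range(steps):
        x -= learning_rate * gradient(x)  # step downhill on the objective
    return x

if __name__ == "__main__":
    x_final = optimize()
    print(f"x = {x_final:.4f}, objective = {objective(x_final):.6f}")
```

The optimizer drives the objective down whether or not that objective captures what we actually want, which is why choosing the right objective is the central problem Chalmers points to.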
DD We can have very intelligent systems that are not conscious in any interesting way, but they will seem conscious in some ways. It's a matter of whether they are capable of taking their own inner states as objects of scrutiny and doing that recursively and indefinitely. That's the big difference between human consciousness and animal sentience.
DC Deeply baked into our moral system as human beings is that an entity has moral status if and only if it is conscious. If a computer system doesn't have any consciousness, then it's basically a tool. If the systems are conscious, they're systems that we have to start caring about. So, if most AI systems eventually are conscious, then we can't simply use them as our tools.
Our minds are gradually migrating onto computational systems. Eventually we'll have the option of uploading ourselves entirely onto such systems. Maybe the uploaded system will no longer be conscious, or maybe it will just no longer be me.
DD Thanks to deep learning technologies, we can delegate to black boxes the job of finding patterns in all sorts of very large datasets. This means a diminution in the role of the individual conscious scientist. We're beginning to deal with distributed understanding, where no one person understands the results.
DC Biological evolution is now largely being supplanted by cultural evolution. At some point, cultural evolution may be supplanted by artificial design evolution. There are many ways for this to go wrong. We have to be careful.
 

AR See my book Mindworlds on the philosophies of Chalmers and Dennett.

 
