The main argument, as far as I understand it: Artificial intelligence may improve to levels far exceeding human intelligence. This may happen very fast, in an “intelligence explosion”. The consequences of such an event may be an “existential risk”, up to and including human extinction. Therefore there is a need to think about safety when developing artificial intelligence.
Our own record shows that humans rule the planet because of our superior intelligence. In the same way, an Artificial Intelligence may come to rule the world after such an explosion. And humans may not be needed in such a world.
I agree that this might be a problem. However, I don’t see how anything Bostrom or any other human can say about the matter could hope to achieve anything.
By definition, a “superintelligence” is as much superior to human intelligence as human intelligence is to that of monkeys.
That means any speculation by a human author about what a superintelligence might want or be able to do is about as reliable as a monkey’s thoughts about a human’s motivations and abilities. That includes smart human authors, like Bostrom.
Bostrom also neglects the most obvious path to superintelligence, which is that humans simply become dumber and dumber, so that a present-day state-of-the-art Artificial Intelligence is already vastly superior to the stupid and lazy inhabitants of a “Pump Six” world.
I understand that there may be real risks in unleashing an “intelligence explosion”. That’s especially true because the people running the experiment are humans, and as such, not very smart.
But, on balance, I would still rather have a future where computers help people solve problems than one entirely without computers. That’s because there are a lot of problems ahead, especially from global warming.
Sure, if we switch off all computers and shut down the Internet, the risk of an “intelligence explosion” will be avoided completely. If we don’t take such a radical step, it is only a matter of time before it happens.
I would rather have it happen earlier than later. Humanity does not exactly have a stellar record in managing the planet. Things can’t get much worse if an Artificial Superintelligence takes over.
We might even solve global warming in time (though I have already figured that one out).