From the article: "Bostrom’s favorite apocalyptic hypothetical involves a machine that has been programmed to make paper clips (although any mundane product will do). This machine keeps getting smarter and more powerful, but never develops human values. It achieves “superintelligence.” It begins to convert all kinds of ordinary materials into paper clips. Eventually it decides to turn everything on Earth — including the human race (!!!) — into paper clips." In the machine's defense, it was just doing its job.
Critics of Bostrom's theory say it vastly underestimates the challenge of actually creating such an AI. Yes, these hypotheticals could become real problems, but we are nowhere near building intelligence like this. It's the kind of problem we could solve in a hundred years, when it might actually be a problem worth solving.
People in the pro-AI camp think AI can fundamentally change the world for good: a world where we conquer death, and more. There's nothing AI couldn't help us solve. At one end of the spectrum, then, AI radically transforms humanity for our benefit.
At the other end of the spectrum, AI could end humanity in ways we'd never expect. So, there's always that.
Bostrom’s fear, at its simplest, is that humanity will unexpectedly find itself outmatched by a smarter competitor https://t.co/KQiKSsIdAS