Should we be worrying about artificial intelligence? | The Tylt
Elon Musk is warning that artificial intelligence (A.I.) will end humanity. He says the government needs to regulate A.I. and implement preventative measures so our worst fears don't become reality; otherwise, we really might see Skynet come to life. But other industry leaders, like Mark Zuckerberg, say Musk is making a big deal over nothing: we shouldn't worry about worst-case scenarios when the technology isn't anywhere close yet. What do you think? 🤖
Elon Musk, Bill Gates and Stephen Hawking are warning that A.I. might lead to the extinction of humanity if we're not careful. Right now, the A.I. we see in the real world is weak (or narrow) A.I., meaning each system applies artificial intelligence to accomplish one specific task. An A.I. that can identify whether something is a hot dog would be unable to translate languages; these bots are built for one job and one job only.
The real fear is strong A.I. In contrast to weak A.I., strong A.I. would be able to apply intelligence to solve any problem, which could mean creating machines smarter and more capable than humans. That's territory humanity has never encountered. Musk says creating strong A.I. is essentially "summoning the demon": we would be creating something we can't control, and if something goes wrong, there's no guarantee humans will end up on the winning side.
Here's how Eliezer Yudkowsky, a prominent A.I. safety researcher, presents the problem:
“If you want a picture of A.I. gone wrong, don’t imagine marching humanoid robots with glowing red eyes. Imagine tiny invisible synthetic bacteria made of diamond, with tiny onboard computers, hiding inside your bloodstream and everyone else’s. And then, simultaneously, they release one microgram of botulinum toxin. Everyone just falls over dead.
“Only it won’t actually happen like that. It’s impossible for me to predict exactly how we’d lose, because the A.I. will be smarter than I am. When you’re building something smarter than you, you have to get it right on the first try.”
Others say there's no reason to fear A.I. right now. Today's A.I. doesn't come remotely close to the level of sophistication that Musk, Gates and Hawking are worrying about. Facebook founder Mark Zuckerberg says the scaremongering will only impede the pace of progress. Here's how he puts it:
“If we slow down progress in deference to unfounded concerns, we stand in the way of real gains.” He compared A.I. jitters to early fears about airplanes, noting, “We didn’t rush to put rules in place about how airplanes should work before we figured out how they’d fly in the first place.”
A.I. also has the potential to fundamentally change the world for the better. Google co-founder and Alphabet CEO Larry Page thinks A.I. will make human lives better. Machines are just machines; they're not inherently good or bad. At the end of the day, A.I. is a tool that humans use, and whether it does good or harm in the world depends on what we decide to do with it, not on whether an A.I. goes rogue.
Google executives say Larry Page's view on A.I. is shaped by his frustration with how many everyday systems are sub-optimal, from systems that book trips to systems that price crops. He believes A.I. will improve people's lives and has said that, when human needs are more easily met, people will "have more time with their family or to pursue their own interests." Especially, skeptics might quip, once a robot has thrown them out of work.