3 reasons why the AI-pocalypse isn’t headed our way

As we continue to invite artificial intelligence into our lives, fear of it taking over jobs, or even the world, is bubbling under the surface. Science fiction and pop culture cast robots and AI as villains. Even Elon Musk and the late Stephen Hawking have warned of AI taking over.

With all this input, is it any wonder that the AI-pocalypse is a concern even beyond the realms of fiction? Artificial intelligence is growing in scope and capability. It’s entering our lives from every angle, from smart speakers in our homes to AI assistants at work.

But are we really heading for an AI takeover? Well, there’s evidence to suggest not. From embarrassing failures to fundamental limitations, here are three reasons why artificial intelligence won’t take over the world.

Reason one: it’s not as capable as you think

Despite its growing range of uses, AI is still in its infancy. Bots are still babies. In a way, that might seem scary.

AI already seems capable of so much, from understanding what we say to our smart speakers, to helping us with our health. If it can do all that as an infant, imagine what it will be capable of in the future. But that’s looking to be a long way off. The fact is, for every cool new success of AI, there are many failures behind it.

Take, for example, the famous case of Microsoft’s Tay, the AI chatbot that became racist and hateful after copying internet trolls. Or every machine learning chatbot that’s ever misunderstood a basic message. Or what about CLOi, the LG smart home robot that failed to even respond during its unveiling at CES 2018?

Not to mention, AI is easy to fool. Change a single pixel in an image, and it thinks a horse is a frog. Mess with its training data or input, and you can teach it the wrong answers. Take Alexa, which can’t tell a TV news anchor from someone in the room, resulting in a wave of accidental dollhouse orders.

Reason two: its successes are restricted

Even where AI is succeeding, there are limitations. Artificial general intelligence — that is, the AI considered most human-like and linked to superintelligence — is still a long way off. 

Narrow artificial intelligence has seen the most success (though it still isn’t working as well as we would like). These are the AI systems that cover only one or two core activities. Think ordering a pizza, checking bank accounts, booking appointments. It might be working well in these directed tasks, but no one has ever taken over the world by ordering pizza.

Plus, the most successful AI programs are the ones that are working alongside humans, not in place of them. Chatbots are a great example. Chatbot use in customer service is on the up. But chatbots aren’t handling the complex support we ask of businesses — they’re leaving that to the humans.

So, artificial superintelligence — the self-replicating AI overlord — is still little more than a science fiction pipe dream.

Reason three: it won’t think to do it

Arguably, much of the fear around the impending AI-pocalypse comes from ascribing human qualities to AI technology: understanding, self-awareness, desire. But in reality, AI doesn’t want or need anything, any more than a teddy bear feels pain when a child drops it.

Artificial intelligence, no matter how human it seems, is not sentient. Its ‘intelligence’ — or ability to learn — is exactly as it says on the tin: artificial. It’s just masses of data and pattern recognition. As Rodney Brooks puts it: ‘we mistake the performance of machines for their competence.’ In other words, it can do the task it’s taught to do, but it doesn’t understand what it is doing.

In short, AI alone is not going to take over the world, because it simply wouldn’t think to. The thought won’t cross an artificial mind, so to speak.

No AI overlords

Machine intelligence isn’t going to surpass human intelligence any time soon. And even if it does, it’s important to remember that AI is a tool. The biggest danger of AI, then, is not the AI-pocalypse. It’s our vulnerability to artificial stupidity, misuse, and cyber security flaws.

So, instead of worrying about AI taking over, perhaps we should be looking at ways to legislate, and be ready to control, the growing uses of artificial intelligence.

Note: we originally published this article here: https://datafloq.com/read/3-reasons-why-ai-pocalypse-isnt-headed-our-way/6030