Do we need to worry about artificial stupidity?

Thanks to the sci-fi dystopias of pop culture, artificial intelligence has become a source of panic. Fear that we are on our way to creating sentient robots that will overthrow, enslave or outright exterminate us is common. Rest assured, we are a long way from artificial intelligence sophisticated enough to overthrow us.

However, in the development of AI, we have started to create machines and programs that deliberately make mistakes. There are instances where artificial intelligence fails to perform its intended task. Then, there’s the current incapability of AI to understand emotions and ethics.

Which leads us to ask, do we need to worry about artificial stupidity instead?

The AI among us

AI-powered bots already interact with us daily. They're in our homes, they're replying to our emails, they're in our video games. Today, we carry artificial intelligence around in our back pockets.

The hope is that artificial intelligence will enable vast leaps in human knowledge. AI will help us cure illness, take us deeper into space, and generally improve our quality of life.

But currently, we’re worrying about AI taking over. We fear that it will one day become capable of creating ever smarter AI, superseding us in every aspect of our lives. This fear comes not only from science fiction but from respected names in tech, from the late Stephen Hawking to Elon Musk.

But we might be worrying about the wrong thing.

Artificial stupidity

Some argue that this sci-fi fuelled horror is an extremely remote risk. There's no denying that AI could (eventually) become far more intelligent than humans. But we don't know, and can't know, whether that will benefit us, remain neutral, or be our downfall.

In short, as far as artificial intelligence is concerned, the future is a gamble.

Meanwhile, we have the current state of AI to worry about. Is a dystopian future the real cause for concern, or should we be worrying instead about artificial stupidity?

There are three different understandings of the term ‘artificial stupidity’.

  • AI that fails at its designed job
  • ‘Dumbed-down’ AI
  • AI inabilities

Each of these elements of artificial stupidity comes with real dangers if handled poorly.

AI that fails

Artificial stupidity was originally used as a derogatory name for any AI that failed to perform its designed role. Fabio the Pepper robot is a famous example. This disastrous AI bot was designed to help UK supermarket shoppers. Instead, it only proved creepy and confusing — leading to “the sack”.

But consider what it would mean if AI handling an important role malfunctions. For example, a medical diagnosis AI could give an incorrect diagnosis or fail to recognise a serious problem. As a result, the patient could find their condition deteriorating.

What if a driverless car glitches? The potential resulting crash could claim many lives at once or cause serious injury. If our AI gets things wrong in important fields, it could be fatal to the humans involved.

But with careful moderation and strict regulation, artificial stupidity of this kind is less of a problem. AI decisions need a backup of human knowledge and understanding. This way, AI can boost our knowledge and support us.

Digital dumbing-down

Some artificial intelligence machines and programs are deliberately ‘dumbed-down’. This marks an entirely different take on the term artificial stupidity. By putting spelling errors in typed messages, not adhering to strict grammar and so on, AI seems less intelligent. These (fully intentional) errors are coded into the system with the goal of creating AI that appears human.
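The idea is simple to sketch in code. The snippet below is a minimal, hypothetical illustration (the function name, error rate and error types are all assumptions, not any real product's implementation): each letter in a bot's outgoing message has a small chance of being swapped with its neighbour or dropped, mimicking the slips a human typist makes.

```python
import random

def humanise(message, error_rate=0.05, seed=None):
    """Deliberately introduce human-looking typos into a bot's message.

    Each letter has a small chance of triggering an 'error': either a
    swap with the following letter or an outright omission. This is an
    illustrative sketch of 'dumbed-down' AI, not a real system's code.
    """
    rng = random.Random(seed)
    chars = list(message)
    out = []
    i = 0
    while i < len(chars):
        if chars[i].isalpha() and rng.random() < error_rate:
            # Half the time, swap this letter with the next one...
            if i + 1 < len(chars) and chars[i + 1].isalpha() and rng.random() < 0.5:
                out.extend([chars[i + 1], chars[i]])
                i += 2
                continue
            # ...otherwise drop the letter entirely.
            i += 1
            continue
        out.append(chars[i])
        i += 1
    return "".join(out)
```

With `error_rate=0` the message passes through untouched; raising the rate makes the bot look progressively sloppier, and hence more convincingly human.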

Artificial stupidity in this sense could be a hindrance to AI, or it could be the part of artificial 'intelligence' that makes it seem the most human. The problem is, bots are already used for online scams and cyber-attacks. For example, bot 'catfish' profiles on dating apps can lure victims into blackmail. The better AI gets at fooling us into thinking it's human, the harder it will be to guard against such attacks.

But human-like AI, if handled correctly, could also do a lot of good. A human-like bot could help improve the customer experience for automated customer service interactions. Or it could provide companionship for lonely people.

Artificial incapability

Artificial stupidity could also refer to the current limitations of AI. Machines currently have no common sense and no emotional understanding. They have a limited acquisition and maintenance of knowledge, and no indication of morals or ethics. Artificial stupidity, then, relates to the human understanding and moral code of which AI is incapable. It’ll blindly do whatever it is told (even to our detriment).

While the advancement of AI means that machines can simulate humans, it doesn’t mean that the machine has any real, deep level understanding of what it is producing. The inability of artificial intelligence to differentiate between good and bad, paired with how easily it can be misled, is perhaps the most concerning element of artificial stupidity.

This incapability of AI to differentiate between good and bad means that if an individual with ill intent tells an AI bot to do something, it will do so without question. This form of artificial stupidity makes AI a prime target for weaponisation. Bots are already used in morally dubious ways, such as hacking, spam and censorship. You need only look at Microsoft's Tay to see just how easily so-called artificial intelligence can be misled.

But this risk can again be mitigated through legislation and careful use of AI. By recognising the limitations of artificial intelligence, we reduce the risk of falling foul of it.

A lot to offer

Most of the AI that we use daily is useful. It helps us optimise our time, remember important dates, and stay entertained. It stands to help us maximise our productivity at home and at work. And it even looks out for our health by increasing road safety and improving medical diagnosis and research. Used carefully and correctly, AI has much to offer.

We need to stop focusing on artificial intelligence superseding us in the distant future. If we really must worry about the development of AI, then our focus needs to be on the threat of artificial stupidity. We need to meet the robot revolution with legislation, care and a drop of cynicism.