Can we trust robots? Can they trust us?
They say trust is a two-way street.
With the rise of artificial intelligence, automation and autonomous robots, the question of technological trust is a pressing one. Concerns surround ethics, job loss and data misuse. And the question ‘can we trust robots?’ is a common one, sparking lively discussion.
But the question can also be reframed. From violence to derision, it seems that not everyone is playing nice with the autonomous machines out there. Our interactions with robots are often streaked with malice.
So, can we trust robots? And can they trust us?
We can’t trust robots
Maybe it’s due to all the killer robot science fiction. Or maybe it’s all the negative press surrounding the future of work that has us pondering ‘can we trust robots?’ Either way, human mistrust of machines is rife. (And often justified.)
For instance, there’s the concern of algorithmic bias. If an AI is trained on a biased data set, or someone feeds biased rules into an automation system, you get biased outcomes. And the robots aren’t going to notice that they’re perpetuating bias, either. If a robot’s response or behaviour is that easy to corrupt, it’s hard to trust that it is fair.
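To see how easily this happens, here is a minimal sketch (with hypothetical data and a hypothetical rule) of a biased rule baked into an automated decision. Once encoded, the rule is applied uniformly, and nothing in the system flags that two identical candidates are treated differently:

```python
# Hypothetical applicant records for illustration only.
applicants = [
    {"name": "A", "postcode": "1001", "score": 85},
    {"name": "B", "postcode": "2002", "score": 85},
]

def approve(applicant):
    # A biased rule someone fed into the system: postcode acts as a
    # proxy for a protected group. The system applies it without question.
    if applicant["postcode"] == "2002":
        return False  # rejected regardless of score
    return applicant["score"] >= 80

for a in applicants:
    print(a["name"], approve(a))
```

Both applicants have the same score, yet one is rejected. The automation doesn’t ‘notice’ the unfairness; it simply executes the rule it was given.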
Other ethical concerns contribute to a general distrust of robotics and AI. How can we be sure of the security and privacy of the data they use? Is trusting these robots going to mean mass job loss? As robots become capable of more and more, not trusting them could be a defence mechanism against an uncertain future.
We can trust robots
For all the issues with robots, there are also reasons the answer to the ‘can we trust robots’ question is yes.
We can trust that robots and AI will only and always do as programmed. That is, what humans have told or taught them to do. It’s not always easy to remember that as lifelike as robots seem, they are still tools. They’re under human control; it’s up to us how we use them.
Another reason to trust robots is that they don’t have a sense of ‘want’ or desire. AI programs don’t think in the same way that humans do. They can identify trends and patterns, make decisions based on data, and carry out basic tasks. But they don’t have the same drive to innovate or solve new problems.
Can robots trust us?
What about the other side of the two-way trust issue, though? Can robots trust humans?
The short answer is no, simply because they don’t have the capacity to feel trust. They don’t comprehend trust or understand that you’re ‘hurting’ them.
But even if robots did have a sense of trust, we humans haven’t given them much reason to extend it to us.
Maybe it’s not the robots we need to trust, it’s the humans in charge of them.
Cruelty to bots
Robots have become more common in daily life, making the question ‘can we trust robots’ all the more pressing. But while the jury is out, the bots already out there haven’t all had a welcoming experience.
In 2015, a hitchhiking robot known as hitchBOT met an untimely end on the streets of Philadelphia. The robot was designed to be as endearing as possible, sporting pool-noodle limbs and the ability to smile and wink.
And for a while, people responded well, enjoying and taking part in the bot’s hitchhiking journey. Until the child-like bot was found with its arms and legs ripped off and its head missing.
Many of hitchBOT’s followers were saddened by the robot’s demise. But the fact remains that hitchBOT couldn’t trust everyone to be as friendly as the majority.
The shunned security bots
In 2017, a security robot in San Francisco suffered more than one instance of abuse. The robot, known as K9, endured repeated attacks before being banned from patrolling.
These attacks included being knocked over, being doused in barbecue sauce, being covered with a tarp and even having faeces smeared over it.
In Silicon Valley, another security bot (K5) was punched to the ground in a car park.
Abusing voice assistants
It’s not just physical robots that have met with abuse at the hands of humans. Voice assistants like Alexa and Siri have programmed responses to abusive input. Worse, those responses have been criticised for perpetuating sexist tropes.
It seems that as it stands, robots can’t (or shouldn’t) trust us. If, that is, they had any concept of trust.
Can we trust robots?
The problem with asking ‘can we trust robots’ — and indeed, ‘can robots trust us’ — is that it over-humanises the machines. It supposes that robots are capable of trust and that they need to trust in the first place.
Perhaps the violence toward robots stems from our lack of trust in the devices. Or perhaps our in-built anthropomorphism is what triggers the emotional response to machines. Either way, our trust issues are misplaced.
‘Trust’ isn’t a bot concern. And yet for all the need to limit our emotional responses to robots, we still need to train them to recognise emotions. Rather than focusing on trust, then, we would do better to focus on mutual understanding.