Friend or foe? Five ethical questions raised by AI



What are the ethical implications of artificial intelligence? We know that we are facing an automated future. We know that AI is proving ever more capable, and its applications ever more wide-ranging. What we don’t know, however, is whether the technology will ultimately prove friend or foe.

AI holds the potential for great good. It could drive us to a post-work utopia in which our time is liberated from labour. Or, it could lead us down a darker path, one with increased poverty and social isolation.

So, what moral framework should guide us as we implement AI? Here are five ethical questions surrounding our AI-infused future.


How could AI affect our professional relationships and careers?

AI and automation reduce the number of repetitive manual tasks an employee has to complete. It’s usually these kinds of mundane tasks that cause death by admin: they destroy morale and drag down productivity.

So, if AI can take on the tasks we all hate, would it make us better colleagues? That’s one possibility. With AI handling the time-draining daily slog, we could spend more time working as a team on new ideas and innovation. We could challenge ourselves with more valuable or creative pursuits, and enjoy the resulting change of pace.

But there’s a flipside. Introducing AI to our professional lives could leave us working in AI silos, further separated from one another. Rather than talk to people, we talk to their AI assistants. In this way, AI would act as a technological wedge between us and genuine human contact. We aren’t asking Bill from HR for help, we’re asking a robot to ask Bill for us.


What happens if AI does take over our jobs?

If AI takes our jobs, how are we going to spend our time? For most of history, we sold our waking hours just to be able to survive. Today, we sell them to earn money.

So, when we no longer have to sell time to survive, AI could enable us to spend our lives socially. We could engage more with our communities and loved ones. We might spend more time furthering human knowledge. Or we might enjoy a proliferation of newly created art, music and fiction. As technology continues to evolve, we may even learn new ways to contribute to human society.

But then again, should we even give AI the chance to take over? There are some roles where it could be unethical to let a machine take over at all. For example, a nurse, a soldier, a police officer or a judge. These jobs all require empathy, respect, and human judgement. Is AI alone capable of providing the humanity we need in these roles?

Then there’s the fact that work is often identity-forming. For many of us, it provides both structure and purpose. Without jobs, then, our mental health could suffer.


How do we distribute wealth if we aren’t working?

This leads to the next ethical AI question: if AI takes our jobs, how do we support our families?

Our economic system requires us to trade our time and skills for compensation, most often in the form of an hourly wage. If we’re heading towards a post-work society, this is no longer going to be viable. We would lose the trade of labour for wages, but not our need for what work provides: food, shelter, security and so on.

Precious few people will be stakeholders in the AI systems handling our old jobs. Under our current compensation system, that would mean the money flows almost entirely to AI suppliers.

The wealth gap is already widening. If we implement AI and leave everything else as it is, that gap could grow wider still, pushing more people into poverty. For ethical AI to be incorporated into our economy, then, we must find a way to distribute wealth fairly.


Will AI impact our humanity?

Machines and software are designed to capitalise on the way our brains work. They trigger the reward response in the human brain, making us more inclined to keep using them. Look at video games, our addiction to screens, clickbait headlines, even fruit machines.

So, what if this effect is to the detriment of our humanity? The issues surrounding social media and mental health come to mind as an example. We spend less and less time interacting with each other in the real world. Instead, we post digital propaganda about our lives and seek validation through our screens.

Plus, a large part of our humanity lies in sympathy and empathy, qualities AI is incapable of. If we allow AI to make our decisions for us, how will this affect our impulse to help people in times of crisis?

In contrast, careful AI use could nudge us towards more socially beneficial behaviour. With our tasks handled by AI, we would have more waking time to interact with and help each other. The question is, would we do so?


Do we need to worry about artificial stupidity?

AI is not clever; it just appears to be. It’s easy to trick, it has no moral compass, and in some cases, it’s designed to be stupid. AI fails regularly, so we must ask: do we need to worry about the ethical implications of artificial stupidity?

For instance, AI is notoriously prone to bias. It’s trained on past data, and any human bias within that data can be learned and replicated by the AI. So, if there’s an unconscious bias against a given race or gender in the data, the AI will also show bias against that group.
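As a minimal sketch of how this happens (the data, library choice and numbers below are purely illustrative assumptions, not drawn from any real system), a model trained on historically skewed hiring decisions will reproduce that skew for candidates of identical skill:

```python
# Toy illustration: a classifier trained on synthetic, historically biased
# hiring data learns to replicate that bias. All data here is made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Feature 0: candidate skill score; feature 1: group membership (0 or 1).
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical decisions favoured group 0: same skill, different hire rates.
hired = (skill + np.where(group == 0, 0.8, -0.8)
         + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill but different group membership.
same_skill = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(same_skill)[:, 1])
# The group that was historically favoured gets a much higher predicted
# probability of being hired, despite identical skill.
```

Nothing in the model is "prejudiced" in a human sense; it simply reproduces the pattern it was shown, which is exactly the problem.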

Then there’s the more pressing issue: AI has no moral compass. If it is fooled, biased or instructed to act maliciously, it will act as instructed without question, whether that’s ethical or not.

For ethical AI implementation, then, we must monitor its use, protect against its misuse, and legislate accordingly.


AI: friend or foe?

There are no concrete answers when it comes to AI and the future. But by continually asking the right questions, we can at least start to think about the new frontier for ethics that AI presents.

It is true that we stand to gain enormously from implementing AI. It’s also true that there are many ethical pitfalls along the way.

So, the onus lies on us to be smart with the AI steps we take. We are the ones responsible for the security, efficiency and utility of AI. We need to seize the opportunity to collaborate more, contribute more, and create more. Ultimately, it’s up to us to make sure that our use of AI helps, and doesn’t hurt.


Note: we originally published this article here: https://dzone.com/articles/friend-or-foe-five-ethical-questions-raised-by-ai