Are AI ethics impossible?
What should AI be, who should it help, and how should we use it?
Artificial intelligence is technology that, until recently, populated science fiction. It would occasionally play the helpful sidekick, but all too often it filled the role of villain. Naturally, then, as the technology has become a reality, ethical concerns have risen in tandem.
Discussions of AI ethics centre on transparency, bias and the need for legislation. They seek to define a set of principles that will guarantee AI is developed and used in a morally just way.
But ethics aren’t as black and white as the logic that informs technology. So, are AI ethics even possible?
Defining what’s ethical
Ethics are a set of moral principles that direct a person’s (or entity’s) actions. They outline what makes an action right or wrong, good or bad. The problem is, achieving any solid, undeniable idea of ethics is easier said than done.
As such, there are several different schools of thought when it comes to what makes something ethical. For instance, Consequentialism measures the ethics of an action based on its consequences. The ends justify the means.
Conversely, there’s Deontology and Kantianism, which place the ethical burden on the action itself. Here, we’re morally obligated to act in accordance with a certain set of principles, regardless of outcome.
There’s also moral scepticism. That is, the belief that moral knowledge is impossible. That you can’t know what is morally good or bad. But if that’s the case, does it mean AI ethics are an impossible goal?
What are AI ethics?
The question that’s puzzling many today is what it means for artificial intelligence (AI) to be ethical. AI itself is a tool, and a tool cannot be good or bad. But people can use it in ethical or unethical ways, and it can help or hinder these uses. So, when discussing AI ethics, you’re discussing what makes for ethical AI use.
In general, a few recurring principles have emerged worldwide in the pursuit of a set of ethics for AI.
• Transparency
Transparency refers to the ability to see and understand how an AI has reached the answers and decisions it outputs. It’s one of the most touted principles in AI ethics, as it relates to all those that follow. Without transparency, achieving the other ethical principles would be much harder.
• Fairness and justice
Fairness and justice are ethical principles that deal with the outcomes AI produces. They’re often concerned with the prevention and mitigation of unwanted algorithmic bias: AI-generated outcomes must be fair, free from bias and discrimination.
• Responsibility and accountability
Someone needs to be responsible for ensuring the safety and fairness of AI use. And, should an application cause harm, someone must be held accountable, both legally and morally.
• Privacy
AI uses masses of data to learn and to work. When defining a set of AI ethics, then, the need to protect this data and preserve the privacy of those involved is a common consideration.
• Beneficence / non-maleficence
Beneficence in AI means that the technology should promote good. Others consider non-maleficence enough for ethical AI. This means that, at the very least, AI should do no harm. It calls for safety and security.
Achieving AI ethics
The problem is, just as there are different ways of looking at general ethics, there are differing opinions about how to apply ethics to AI. While there is this convergence on the principles of AI ethics, agreeing about what these principles mean — and how to achieve them — is a different kettle of fish.
Ethics are in the eye of the beholder. They’re informed by social, political and religious views, and so there’s no universal agreement on what is ‘right’ and ‘wrong’. As such, it’s arguably impossible to create an objective outline of AI ethics.
This is where moral scepticism comes into play. If moral knowledge is impossible, then we can’t know whether any action or use of AI is morally just. In other words, though we try, we can never be certain that our AI use is ethical.
A clear-cut set of AI ethics, then, is impossible.
The difficulty of universal agreement
People understand and interpret each of the principles of AI ethics differently. They disagree on why, and to what extent, each principle is important. What and who the principles pertain to is also up for debate.
• Transparency and privacy
How ‘private’ does data need to be? How much should stay private, and how does the demand for transparency factor into that?
For example, suppose an AI uses personal data to reach an output. Someone questions the decision and calls for transparency, which in turn risks the privacy of the data involved.
• Responsibility and accountability
Who do we hold responsible for an AI’s decisions? The developer of the AI, the entity that used it and the company that offered the technology are all possible candidates. And what, exactly, are they accountable for?
• Fairness and justice
There are differing opinions about what is just and what makes something fair. Do you follow the ‘rules’ to the letter, or do you allow for extenuating circumstances?
• Beneficence / non-maleficence
What is a ‘good’ outcome? Who should the AI benefit? Some people might favour a utilitarian approach, aiming for ‘the greater good’. Others might consider motive and the morals of the action as the basis for deciding whether an AI is beneficent.
Alternatively, is non-maleficence enough? In which case, what counts as ‘harm’?
AI ethics: impossible but important
Having a set of rock-solid AI ethics that everyone unequivocally agrees with is impossible. The reason is that there’s no universal understanding of what is ethical.
But that doesn’t mean anyone should abandon the push for ethical AI use. Doing so would be folly: it would destroy trust in and acceptance of the technology, and could lead to harmful uses of artificial intelligence.
Instead, AI ethics should act as a guideline. A set of principles that inform the way we design, use and incorporate AI into our daily lives. We need to aim for AI that supports ethical uses, and that can adapt to changing views on what it is to be good.