What is the FAT machine learning model?
In your most recent foray into the world of artificial intelligence, you’ll likely have come across terms like ‘black box’ or ‘XAI’. And, in exploring these, you might have seen ‘FAT ML’ or ‘FAT machine learning’.
No, we aren’t trying to tell you machine learning has indulged in too much cake. Rather, FAT is a model for machine learning that promotes ethical AI.
So, what exactly is the FAT machine learning model, and why is it important?
Quick recap: machine learning and ethics
Before exploring the FAT machine learning model, it’s worth remembering some key terminology.
For a start, machine learning. Machine learning is a subsection of artificial intelligence. It refers to a machine’s ability to ‘learn’ from data by recognising patterns, and so improve over time.
Then, there’s the AI black box. This is the problem of being unable to understand the processes AI uses to reach its answers. This is an issue within the study and creation of AI, and one that is prevalent in machine learning tools. (We don’t know exactly what they are learning.)
This leads to ethical problems because it’s easy for machine learning AI to pick up biases from data. (Think of Amazon’s sexist recruitment tool, or Microsoft’s racist Tay chatbot.) If we don’t know how or why AI reaches a conclusion, we can’t know it’s the ‘right’ answer. And we can’t fix incorrect or biased processes, either.
FAT machine learning
Enter the FAT machine learning model. This is a model for machine learning that prioritises FAT: fairness, accountability, and transparency.
The fairness part of the FAT machine learning model concerns itself with the output of the ML tool. That is, for fairness, the answers a machine learning algorithm gives must not fall foul of bias or discrimination.
Fairness ensures that the output of the machine does not have an unjust impact on end-users from any demographic.
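To make that concrete, here’s a minimal sketch of one common way fairness gets measured in practice: comparing selection rates between demographic groups (often called demographic parity). The group data and threshold here are hypothetical, for illustration only; the FAT model doesn’t prescribe any single metric.

```python
def selection_rate(outcomes):
    """Fraction of positive outcomes (e.g. 'loan approved' = 1)."""
    return sum(outcomes) / len(outcomes)

# Hypothetical approval decisions for two demographic groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]
group_b = [1, 0, 0, 1, 0, 0, 1, 0]

rate_a = selection_rate(group_a)
rate_b = selection_rate(group_b)

# A large gap in selection rates is a warning sign of unjust impact.
parity_gap = abs(rate_a - rate_b)
print(f"Selection rates: {rate_a:.2f} vs {rate_b:.2f}; gap = {parity_gap:.3f}")
```

A gap near zero suggests the two groups are treated similarly on this metric; a large gap is a prompt to investigate the model and its training data.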
The ‘A’ of the FAT machine learning model stands for accountability. This is the need for someone to be held accountable should a machine learning algorithm go wrong.
Accountability means that there’s someone responsible for the results of AI-fuelled decisions. It’s about being able to explain and control the outcome of such decisions. And, where harm occurs, that someone is legally responsible, too.
Accountability in the FAT machine learning model requires an explanation for the impact of an AI decision. Transparency, meanwhile, is about being able to see the reasoning behind the decision. (And the ability to then explain that reasoning.)
So, transparency is about the ability to see and explain two key things. First, exactly what the machine learning algorithm has learned. And second, how it uses what it’s learned to reach its final output. This is also known as explainable AI.
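One way to picture transparency is a model simple enough that its learned parameters can be read and explained directly, unlike a black-box network. Below is a hedged sketch along those lines: the feature names and weights are entirely hypothetical stand-ins for values a real model would learn from data.

```python
# Hypothetical features and learned weights for a loan-scoring model.
FEATURES = ["income", "debt", "years_employed"]
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def score(applicant):
    """Weighted sum: the overall decision score."""
    return sum(WEIGHTS[f] * applicant[f] for f in FEATURES)

def explain(applicant):
    """Transparency: break the score into per-feature contributions."""
    return {f: WEIGHTS[f] * applicant[f] for f in FEATURES}

applicant = {"income": 2.0, "debt": 1.0, "years_employed": 3.0}
print(score(applicant))
print(explain(applicant))  # e.g. shows that 'debt' pulled the score down
```

With a model like this, we can see both what was learned (the weights) and how it produced a given output (the per-feature contributions), which is exactly what explainable AI asks of far more complex systems.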
Why FAT machine learning is important
Machine learning algorithms have an increasing influence over major decisions: ones with a heavy impact on individual lives. The technology might soon provide advice that decides whether you get that loan, that flat, that medicine.
The decisions that machine learning supports have a major impact on the safety, health and wellbeing of individuals. Without fairness, accountability and transparency, algorithmic authority would be a hard pill to swallow.
The FAT machine learning model is about guarding against bias. With it, it’ll be that little bit less scary to accept and embrace technology that could one day hold major, unbiased sway over our lives.
FAT machine learning is a method of addressing the black box problem in machine learning. And, by extension, the ethical issues surrounding machine learning and AI technology.
The FAT model in machine learning is something we must strive for. Without it, the road to widespread AI trust and acceptance might be too challenging.