Algorithmic bias was born 40 years ago



Bias and discrimination are issues you might think of as distinctly human. Computers, in theory, should be immune. After all, they apply the same rules no matter who is in front of them. Sadly, however, those rules can produce outcomes every bit as discriminatory as our own.

This is algorithmic bias. And it’s not a new problem.

In fact, the issue of machine discrimination first reared its head a whole four decades ago. Here’s the tale of how the first (known) algorithmic bias was born.


An algorithm from 40 years ago

40 years ago, Dr Geoffrey Franglen was part of the admissions process at St George’s Hospital Medical School. As one of the assessors, he would pore over some 2,000–2,500 student applications each year. This was a tedious and long-winded task. One that Dr Franglen sought to automate.

And so, he wrote an algorithm to take on this initial screening. He designed it to mimic the process that he and his fellow assessors followed. Come 1979, it was ready. That year, both the algorithm and the human assessors screened the annual applications, and they agreed in 90–95% of cases.

By 1982, the algorithm was processing all initial applications to St George’s Hospital Medical School. Along with being more efficient, the algorithm was believed to bring fewer inconsistencies to the screening process, and thus make for a fairer admissions process.

Given that this is an article about algorithmic bias, you can probably guess that things didn’t quite go as planned.


Franglen’s monster

Before long, concerns about the diversity of successful applicants began to surface, prompting an investigation by the UK Commission for Racial Equality in December 1986.

Here, for the first documented time, algorithmic bias came to light. It was found that the algorithm included rules that placed weight on both place of birth and the applicant’s name. From this, the algorithm would classify applicants as either ‘Caucasian’ or ‘non-Caucasian’. Applicants landing in the latter category had the algorithm weighted against them.

Specifically, just having a ‘non-European name’ could cost an applicant 15 points from the score the algorithm awarded. Women, meanwhile, lost an average of 3 points by virtue of their sex.
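To see how easily rules like these hide inside an otherwise ordinary scoring routine, here is a minimal, purely hypothetical sketch in Python. Franglen’s original program and its exact rules were never published in this form; the field names are invented, and only the 15-point and 3-point penalties come from the investigation’s findings.

    # Hypothetical illustration only, not Franglen's actual code or full rule set.
    # It shows how hand-written weights can bake bias into a 'neutral' score.

    def screening_score(application):
        score = 0

        # Legitimate-looking academic criteria...
        score += application["exam_grades"]        # points for academic results
        score += application["reference_rating"]   # assessor's rating of the reference

        # ...alongside rules that penalise who the applicant is, not how they performed.
        if application["non_european_name"]:
            score -= 15                            # the reported penalty for a 'non-European' name
        if application["sex"] == "female":
            score -= 3                             # the reported average penalty for women

        return score

Run this on two applications that are identical except for how the name is classified, and the scores come out 15 points apart. That, in essence, is what the Commission found.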

In other words, applicants became victims of algorithmic bias. The school was consequently found guilty of racial and sexual discrimination in its admissions process.


Why did this happen?

Algorithmic bias comes from human bias, both unconscious and otherwise. In this case, the algorithm was embodying and perpetuating prejudice that already existed.

The assessment system ‘learnt’ its rules from human practice and historical trends in admissions. Don’t forget: the human assessors agreed with its results 90–95% of the time, whether they were consciously aware of their bias or not.

To put it another way, the algorithm had been designed to behave the way it did. From there, it would always operate with the bias coded into it. It couldn’t learn or change on its own. It didn’t have a moral code, nor did it understand what it was doing.


Algorithmic bias endures

Computers work by following rules; by using unemotional logic. But that doesn’t make them immune to baseless discrimination.

If the rules (or indeed, the data used to create them) are flawed or biased, the computer’s output will reflect it.

As such, algorithmic bias is still a problem that we face today, 40 years on. With the rise of AI, artificial neural networks, and machine learning, it’s a problem that’s becoming ever harder to spot and fix.


Useful links

What is an algorithm? An ‘in a nutshell’ explanation

Do we need to worry about artificial stupidity?

The AI black box problem