Code-deep discrimination: combatting racial bias in algorithms



Racial bias permeates the technology around us. Subtle prejudice is woven into the machines and devices we use daily, and it surfaces as discriminatory behaviour in our tech tools.

This rise of algorithmic bias means that tech not only behaves with bias, but also fuels human discrimination. And the proliferation of tech into every area of our lives means that racism seeps into more (seemingly impartial) products and processes each day. Worse still, the ongoing rise of AI-powered tools could see the problem grow exponentially. 

Most ‘racist’ tech tools are not discriminatory out of malice. More commonly, the bias stems from oversights during development and design. Here’s what you need to know about this code-deep discrimination, and what we can do to combat racial bias in machines.


The problem

Racism in technology goes further back than you might expect. As far back as the 1950s and 60s, emerging cameras and colour film were calibrated specifically to make white skin look good. It took complaints from furniture and chocolate companies, whose products the film rendered poorly, before the issue was addressed.

Today’s cameras haven’t fared much better. For instance, we now have photo apps claiming to make you more attractive (‘hot’) by lightening your skin tone.

The rise of AI tools has only added to the issue. Facial recognition AI provides a stark example. People of colour have reported finding it easier to use facial recognition when they wear white masks. Indeed, studies show that facial recognition systems have higher error rates for people of colour. The highest error rates occur for darker-skinned women, at around 35%.
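Disparities like these only come to light when error rates are measured per demographic group rather than in aggregate. A minimal sketch of such an audit, using made-up group labels and toy data (none of it from any real system):

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute per-group error rates from (group, predicted, actual) records.

    The grouping scheme and field values here are illustrative only.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    # Error rate = fraction of records where the prediction was wrong.
    return {g: errors[g] / totals[g] for g in totals}

# Toy data: a large gap between groups is the red flag an audit looks for.
sample = [
    ("group_a", "match", "match"),
    ("group_a", "match", "match"),
    ("group_a", "no_match", "match"),
    ("group_b", "no_match", "match"),
    ("group_b", "no_match", "match"),
    ("group_b", "match", "match"),
]
rates = error_rates_by_group(sample)
```

An aggregate accuracy figure would hide the gap; the per-group breakdown makes it visible.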

AI has real difficulty identifying darker skin tones. And with driverless cars on the way, this bias in tech could soon prove fatal.


There’s more to this issue

Racism in technology doesn’t stop at tools displaying colour partiality, either. Biased algorithms mean that many people end up in echo chambers, thanks to a phenomenon known as ‘filter bubbles’.

Filter bubbles occur because algorithms show you the content you already enjoy and relate to. Over time, content from people who are different from you, or who hold opposing views, ends up filtered out of your news feed.

The result is less and less exposure to the experiences and worldviews of others. Consequently, your views can gradually become more extreme through overconsumption of a single narrow viewpoint.

In other words, ‘racist’ algorithms are potentially aggravating discriminatory ideologies in humans.


Diverse hiring

Technology isn’t created in a vacuum. It’s built by humans. If the humans behind our tech tools hold a bias, conscious or otherwise, it can taint the technology.

More diverse hiring, then, might help to quell the racism of our tech tools. The workforce within the tech industry is largely homogenous: employees of colour, women and older workers are all distinctly scarce. And yet technology created by this white, male majority affects us all.

Our technology, at least in part, reflects this lack of hiring diversity. Diversifying the people behind our tools would bring wider insight during development: an eclectic team has more opportunities to spot racial usability issues before a product ships. For example, facial recognition devices developed by a racially diverse team would be less likely to work reliably only on white skin.

Regardless of any further benefits, diversifying the tech industry workforce is a change worth pursuing in its own right. But it’s not likely to combat code-deep discrimination on its own.


Addressing AI bias

AI is particularly prone to bias. It needs vast volumes of training data to learn from before it can work accurately. If that data is not diverse, how can the outcome be fair?

Biased training data is a major cause of ‘racist’ AI. An AI cannot identify a person of colour if its training never taught it to. Nor can it attribute ‘hotness’ to light skin tones without first being fed biased information. And it has no way of knowing whether the processes it performs are racially neutral.

It is down to developers to ensure inclusion in the AI tools they create. One step towards reducing AI bias, then, is to ensure that all training data is sufficiently diverse and representative of the people the tool will affect.
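One simple way to check representativeness is to compare the demographic mix of a training set against a reference population. A minimal sketch, where the group labels, reference shares and the "half the expected share" threshold are all assumptions for illustration:

```python
from collections import Counter

def representation_report(labels, reference):
    """Compare a training set's group mix against reference population shares.

    `labels` is a list of group labels, one per training example;
    `reference` maps each group to its expected share of the population.
    Group names and the flagging threshold are illustrative assumptions.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    report = {}
    for group, expected_share in reference.items():
        actual_share = counts.get(group, 0) / total
        report[group] = {
            "expected": expected_share,
            "actual": round(actual_share, 3),
            # Flag groups represented at less than half their expected share.
            "underrepresented": actual_share < 0.5 * expected_share,
        }
    return report

# Toy training set that skews heavily toward one group.
train_labels = ["light"] * 90 + ["dark"] * 10
reference_population = {"light": 0.6, "dark": 0.4}
report = representation_report(train_labels, reference_population)
```

A real audit would use richer demographic attributes and a carefully chosen reference population, but the principle is the same: measure the skew before training, not after deployment.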


Advocating transparency

AI tools can also learn too much. They might learn, for instance, to exclude applications or results based on keywords favoured by one group over another, just as Amazon’s AI recruitment tool did.

Training data provides a springboard, but each interaction further shapes the output of an AI tool. As such, it’s important that we can understand the reasoning behind automated decisions. If we can’t, it becomes harder to detect underlying bias or discrimination. 

So, transparency is another key tool that could help us combat code-deep discrimination in technology. This means regularly analysing the reasoning behind our tech tools’ outputs, allowing us to recognise and fix harmful learned ‘rules’.
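Even without access to a model’s internals, its outputs can be analysed for skew. One widely used check compares favourable-outcome rates between groups; a ratio below 0.8 is the common ‘four-fifths’ warning threshold. A sketch with entirely hypothetical decision data:

```python
def disparate_impact_ratio(outcomes, protected, reference):
    """Ratio of favourable-outcome rates between two groups.

    `outcomes` maps a group label to a list of booleans (True = favourable
    decision). The group names and data below are made up for illustration.
    """
    def rate(group):
        results = outcomes[group]
        return sum(results) / len(results)
    return rate(protected) / rate(reference)

# Toy decisions from a hypothetical automated screening tool.
decisions = {
    "group_a": [True, True, True, False],    # 75% favourable
    "group_b": [True, False, False, False],  # 25% favourable
}
ratio = disparate_impact_ratio(decisions, "group_b", "group_a")
```

A ratio this far below 0.8 would not prove discrimination on its own, but it is exactly the kind of signal that prompts a closer look at what the tool has learned.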


Remember the limitations

Finally, combatting the code-deep discrimination in tech tools means remembering the limitations of these tools.

Many of the major problems with racism in technology come from over-reliance on the tech. If we remember that these tools are not yet perfect, we can prevent much of the damage a biased tool can do.

Our ‘racist’ tech tools don’t know any better: they have no moral compass, and they can’t notice their own biases. It’s the responsibility of the team behind the tech to provide that morality. This means tech teams must continue to track, tweak and tune their creations.


Combatting racism with tech

Elsewhere, technology is alleviating racial bias. Take Uber, for instance: the app has made it easier for people of colour to hail a ride (albeit inadvertently).

It’s a small effect, but not an insignificant one. Uber alone isn’t going to combat all racism. But it does show how much of an impact racially inclusive technology can have on people’s lives.

Technology does have the power to help us combat racism. But first, we need to combat the racism in technology.


Please note: we originally published this article here: https://bdtechtalks.com/2019/08/06/combatting-racial-bias-in-algorithms/
