“Never trust anything that can think for itself if you can’t see where it keeps its brain”



In the second instalment of her Harry Potter series, The Chamber of Secrets, J.K. Rowling wrote: “Never trust anything that can think for itself if you can’t see where it keeps its brain.” In doing so, Rowling inadvertently coined an adage for the age of AI.

In the context of the novel, the quote refers to objects that have been bewitched. In the context of modern technology, however, we can apply Rowling’s warning to the artificially intelligent systems increasingly intertwined with our lives.

But this is no witch-hunt against AI. Rather, the lesson is about the importance of looking into the ‘brains’ of AI before we trust its decisions.


Indistinguishable from magic

In the magical world of Harry Potter, any device or object that can think for itself is potentially dangerous. It might be a sentient diary, or it might be an enchanted diadem – if you can’t see where it keeps its brain, be wary.

For us mere muggles, magically enhancing an object is not an option. Instead, we use technology to enhance the world around us. The problem is that millions of us have no idea how that technology works – we just know that it does.

For example, how many smartphone users have a solid understanding of the complex code that keeps their smartphone smart? How many people chatting to Alexa understand the intricate web of connected services that keeps their lives running? When it comes right down to it, what percentage of the population can explain how the internet works on a technical level?

While this does not present a problem per se, it does mean that artificially intelligent systems are widely misunderstood – and widely mistrusted.

It was another British writer – Arthur C. Clarke – who observed that “any sufficiently advanced technology is indistinguishable from magic”. For many, AI is one such sufficiently advanced technology.


Logic, not magic

Yes: AI is complex, powerful, perplexing. But it’s also highly logical. There is nothing “magical” about AI; nothing that is not the product of consistent, analytical, and data-driven computations.

If, then, we are to ‘never trust anything that can think for itself if you can’t see where it keeps its brain’, what does this mean for AI? What are we to make of technology that can make highly logical decisions without letting us in on how exactly it has done so? Does seeing and understanding AI’s ‘brain’ even matter with regard to trusting its output?

As it happens, it does. J.K. Rowling may not have known much about the intricacies of AI, but her quote is pertinent to the AI black box problem nonetheless.


The “AI black box”

The AI black box refers to the opaque inner workings of complex algorithms. We can (simplistically) break an AI-powered decision into three stages: the input, the analysis, and the output. The first and final stages are visible to us – we see the data going in, and we see the data coming out. What we don’t see is the complex workings in between.
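
To make the three stages concrete, here is a minimal sketch in Python, assuming a hypothetical loan-approval model built with scikit-learn (the data and framing are invented purely for illustration). The input and output are plainly visible; the analysis stage is thousands of learned numbers with no human-readable meaning.

```python
# A minimal sketch of the black box: visible input, visible output,
# opaque analysis in between. Hypothetical loan-approval scenario;
# the data is random and purely illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                # stage 1: the input (visible)
y = (X[:, 0] + X[:, 2] > 0).astype(int)      # "approve" depends on two features

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                      random_state=0).fit(X, y)

applicant = rng.normal(size=(1, 4))
print(model.predict(applicant))              # stage 3: the output (visible)

# Stage 2 is the grey area: thousands of learned weights with no
# human-readable meaning.
print(sum(w.size for w in model.coefs_), "opaque parameters in between")
```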

And this grey area is where mistrust of AI – for all of AI’s grounding in logic – is at its most valid. In a world where algorithmic bias has been a known problem for several decades, we need AI to show its workings. We need to see its brain.


Explainable AI

Explainable AI, or XAI, is a potential solution to the AI black box problem. It seeks to bring clarity to the AI decision-making process, so that a human can examine the criteria the AI has used, the logic it has applied, and the potential for error.
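
As a taste of what ‘showing its workings’ can look like in practice, here is a minimal sketch of one model-agnostic XAI technique: permutation importance, as implemented in scikit-learn. The feature names and loan-approval framing are hypothetical, chosen only to make the output readable.

```python
# A minimal sketch of one XAI technique: permutation importance.
# Feature names and the loan-approval framing are hypothetical.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
feature_names = ["income", "age", "credit_history", "postcode"]
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 2] > 0).astype(int)      # only two features truly matter

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                      random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model genuinely relies on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:+.3f}")
```

In this toy setup, income and credit_history should dominate while postcode scores near zero – and a postcode that did score highly would be exactly the kind of red flag (a proxy for protected characteristics) that XAI is meant to surface.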

This would allow us to identify mistakes and prejudices. After all, AI is by no means infallible. Useful, yes. Welcome, yes. But failsafe? No more so than a human.


Never trust anything that can think for itself if you can’t see where it keeps its brain

J.K. Rowling may not be an AI authority, but her quote holds a lesson for our use of artificial intelligence.

AI can acquire bias, make mistakes, and disagree with other artificially intelligent systems. Successfully building XAI, then, would mean we could strip away those layers of obscurity and shed light on precisely how AI reaches its conclusions.

So, not only could we see where it keeps its brain, we could also see how that brain ticks. And that can only be a positive for building trust in the AI systems shaping our future.


Useful links

The AI black box problem

ELI5: explainable AI

Algorithmic bias was born 40 years ago

