Tesler’s theorem and the problem of defining AI
What is AI?
It’s one of those questions with no clear answer.
AI is, of course, short for artificial intelligence. It’s the ‘intelligence’ given to machines — to artificial things. Things that aren’t alive.
Okay. But what is intelligence? What counts as AI? What was once considered artificial intelligence is now something else. It’s a rule-based chatbot. It’s an expert system. It’s an autonomous vacuum cleaner. But it isn’t intelligent.
Herein lies the problem of defining AI — and that’s where Tesler’s theorem comes in.
Tesler’s theorem, as it is popularly known, states that:
“Artificial intelligence is whatever hasn’t been done yet.”
This definition of AI comes from the late Larry Tesler, a computer scientist who worked at Xerox PARC, Apple, Amazon and Yahoo! over the course of his career.
But Tesler’s theorem, or at least the quote as he actually phrased it, focused more on defining intelligence than AI:
“Intelligence is whatever machines haven’t done yet.”
Tesler’s theorem succinctly highlights a major issue with defining AI. Namely, that AI, or ‘intelligence’, is an ever-changing goalpost.
The AI Effect
Tesler’s theorem relates to the AI effect. The AI effect is the phenomenon that sees technology once lauded as ‘artificial intelligence’ lose its shiny AI label. These tools — once considered intelligent — are now not AI at all.
Plenty of examples exist of the AI effect in action. It was once thought that a machine that could beat a grandmaster at chess embodied artificial intelligence. A program called Deep Blue achieved this feat in 1997 against chess grandmaster Garry Kasparov. Then the goalpost moved and Go became the game AI needed to beat. (And it did in 2016, when AlphaGo defeated Lee Sedol in four of five games.)
Or, consider chatbots. A program that could appear as though it was talking to you was considered intelligent. Then it wasn’t, because the program didn’t understand the intent behind your messages.
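Chatbots of that kind worked largely by pattern matching: spot a keyword, return a canned reply. A minimal sketch of the idea (the keywords and replies below are invented for illustration) shows why seeing the rules dispels the sense of intelligence:

```python
# A minimal rule-based chatbot. Once the rules are visible,
# the 'intelligence' evaporates -- it is just keyword matching,
# with no grasp of the intent behind the message.

RULES = {
    "hello": "Hi there! How can I help?",
    "weather": "I don't know the forecast, but I hope it's sunny.",
    "bye": "Goodbye!",
}

def reply(message: str) -> str:
    """Return the canned reply for the first keyword found, else a fallback."""
    text = message.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return "Sorry, I don't understand."

print(reply("Hello, bot!"))      # -> Hi there! How can I help?
print(reply("Tell me a joke"))   # -> Sorry, I don't understand.
```

The program ‘converses’ only in the sense that it maps inputs to outputs; ask anything outside its keyword list and the illusion collapses.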
The point is, every time an artificial intelligence completes a new feat, that feat is no longer a benchmark for intelligence. Going back to Tesler’s theorem, it fits: AI is whatever machines haven’t done.
When it comes to defining AI, much of the discussion lies with what counts as intelligent. But intelligence itself is not an easy metric to measure. There are different types of intelligence. Does intelligence mean knowledge or understanding? Is expertise in one domain (known as narrow AI) enough, or does an AI program need a vast range of knowledge across topics?
The Turing test is one example of attempting to pinpoint what counts as intelligence. Namely, a machine is intelligent if it can pass as human. But then, as with the Eugene Goostman saga, a machine could pass by pretending to be a person with limited intelligence. Does that still count as AI?
Perhaps we will have achieved AI if we create strong or general AI. That is, programs and machines that behave and think exactly as a human does. Or maybe we won’t have achieved AI until we have created artificial superintelligence. That is, machine intelligence that far exceeds human capability.
Then comes Tesler’s theorem. Intelligence is whatever machines haven’t done. So, whatever a machine has done is not intelligence. It’s clever coding. It’s rules. The machine has done it and so it no longer counts as ‘intelligence’.
Artificial intelligence is an umbrella term for a huge host of functions and abilities a computer can have. It covers the ability of a computer to see (computer vision). The ability to learn (machine learning). It includes programs that identify fraud or spot email spam. It’s facial recognition, it’s deepfakes. It’s chatbots that can hold a nuanced conversation.
AI is a marketing term as much as it is a meaningful label. Once an AI program unlocks the sought-after ability, it gets a new name that better describes what it does. For example, an AI program that recognises human faces becomes facial recognition. An AI that identifies email spam is a spam filter.
Following Tesler’s theorem, if AI is what machines haven’t done yet, then a machine achieving AI is impossible. ‘AI’, then, is a moving goalpost.
Tesler’s theorem points out the problem with defining the term ‘AI’ because it shows that what counts as AI is always changing. It renders AI an eternal goal.
As such, future artificial intelligence will likely cover machine capabilities we haven’t thought of yet. Meanwhile, what we view as AI functionality today will likely have found different, more suitable labels.
Perhaps we will one day pinpoint the goalpost of intelligence and stop it from moving. But until we agree what counts as intelligence, the definition of AI will keep changing.
AI is, as Larry Tesler said, whatever machines haven’t achieved yet.