Current AI capabilities: impressive, but exaggerated
Ever since the inception of artificial intelligence, there have been exaggerated claims about AI capabilities. In recent years, the excitement about AI has grown. AI-powered technologies are frequently in the news. They’re entering our homes and integrating into our lives.
And with this growing AI excitement has come a growing amount of AI misinformation.
So, from hype to snake oil, exuberance to misinformation, what are the exaggerated claims surrounding AI, and the realities behind them?
What can AI do, and where can you find it?
Artificial intelligence is an umbrella term for a wide range of technological capabilities. For example, artificial intelligence technologies include machine learning, computer vision, natural language processing, and hyperautomation — to name a few. These tools, in turn, are applicable in multiple industries and use cases.
In other words, AI has many impressive capabilities.
Artificial intelligence allows machines to extrapolate and ‘learn’ from input data, recognising patterns and anomalies. This functionality allows AI-powered tools to perform tasks such as making predictions, profiling users based on their behaviour, and personalising user experiences.
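To make this concrete, here’s a minimal sketch of what ‘learning from data to make predictions’ can mean at its simplest: fitting a straight line to past observations and using it to predict a new value. The data points here are invented purely for illustration.

```python
# A minimal sketch of 'learning' from data: fit a straight line to past
# observations (ordinary least squares), then predict an unseen value.
# All numbers are made-up illustrative data.

def fit_line(xs, ys):
    """Return the slope and intercept minimising squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# 'Training data': hours of product use vs. an engagement score (invented).
hours = [1, 2, 3, 4, 5]
score = [2.1, 3.9, 6.2, 7.8, 10.1]

slope, intercept = fit_line(hours, score)
prediction = slope * 6 + intercept  # predict the score for 6 hours of use
```

Real machine learning models are vastly more sophisticated, but the principle is the same: find a pattern in past data, then apply it to new data.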
AI is also behind the computer vision tools entering the market. These tools allow computers to identify objects in images, or pick out faces from a crowd with facial recognition, for instance. In a similar vein, AI also helps to power the creation of new images and videos. (Some of which are known as deepfakes.)
Artificial intelligence also sits behind the technology that can ‘understand’ us when we speak (via voice recognition) or type (via NLP-powered chatbots).
AI capabilities: exaggerated claims
So, what are some of the exaggerated claims about these AI capabilities?
There is a wide range of diagnostic AI tools being trained and implemented in the healthcare sector. There’s hope that the technology will ease the strain on healthcare professionals and assist with finding difficult-to-spot signs of illness.
Frequently, however, these AIs are reported as ‘outperforming’ their human counterparts, diagnosing conditions like breast cancer or heart disease with higher accuracy than human doctors. The claims soon spiral into predictions that AI will take over doctors’ jobs.
Another AI advancement often in the news is the self-driving car. The claim is that fully autonomous vehicles are just around the corner.
One example of exaggerated AI claims in the world of chatbots comes from a viral article detailing how two AIs, communicating with each other, started to ‘make their own language’ that humans couldn’t understand.
Then there’s Eugene Goostman — a chatbot said to be so good at communicating that it passed the Turing test. (Or so it’s argued.) The Turing test is widely considered a yardstick for measuring computer ‘intelligence’.
AI capabilities: the (still impressive) reality
We know that these claims are exaggerated. Understanding how can help to identify other such exaggerations. So, for each of the above examples, what’s the reality?
In healthcare, artificial intelligence is starting to enter the fray as an assistant, or as an idea for the future. With pattern detection, it can scan for irregularities, which is what allows it to spot things like cancer in patient scans.
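The ‘scanning for irregularities’ idea can be sketched in a few lines: flag any reading that sits unusually far from the rest. The readings below are invented, and real diagnostic tools use far more sophisticated statistical models, but the underlying principle is similar.

```python
# A minimal sketch of anomaly detection: flag readings that lie more than
# `threshold` standard deviations from the mean. Illustrative data only;
# real medical AI uses far more sophisticated models.
import statistics

def flag_anomalies(values, threshold=2.0):
    """Return the readings that deviate unusually far from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    return [v for v in values if abs(v - mean) > threshold * stdev]

# Invented sensor readings: one value is clearly irregular.
readings = [10.1, 9.8, 10.3, 10.0, 9.9, 17.5, 10.2]
anomalies = flag_anomalies(readings)  # flags the outlying reading
```

Crucially, a detector like this only knows about the one kind of irregularity it was built to find, which is exactly why a diagnostic AI trained on one condition can’t do the rest of a doctor’s job.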
Even if AI becomes useful for diagnosis (which it is slowly on its way to doing), that does not mean it’s suitable for any other part of a doctor’s job. Not even, that is, for diagnosing anything other than the one condition it’s been trained to spot. Artificial intelligence is only learning to become a helpful tool for healthcare professionals. It’s not going to be replacing doctors.
Autonomous cars are slowly becoming a reality — but they’re nowhere near the level they’re purported to be.
Before autonomous cars can become mainstream, artificial intelligence — specifically computer vision — needs to become much more robust and much harder to fool. (It needs to be able to ‘see’ the roads, the signs, the hazards.)
There are also ethical discussions that stand in the way. (When a crash is unavoidable, should the AI prioritise saving the life of person X or person Y, for instance?)
These days, chatbots are indeed capable of holding more human-like conversations. In some cases — where the range of likely responses is very narrow — they can even come across as just as effective as a human.
But chatbots, and the AI capabilities that power them, are not yet at a point where they can hold completely human-like conversations. They’re still limited to narrow fields of conversation topics. They still need human support.
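A toy sketch shows why this narrowness is baked in: a simple chatbot matches keywords to a fixed set of topics, and anything outside those topics falls straight through to a human. The intents and replies below are entirely invented; production chatbots use richer language models, but they still hit the same wall at the edge of their training.

```python
# A minimal sketch of why chatbots stay narrow: a keyword-based intent
# matcher only handles topics it was explicitly given, and falls back to
# a human for everything else. Intents and phrasing are invented.

INTENTS = {
    "opening_hours": ["open", "hours", "closing"],
    "refund": ["refund", "return", "money back"],
}

REPLIES = {
    "opening_hours": "We're open 9am-5pm, Monday to Friday.",
    "refund": "You can request a refund within 30 days.",
}

def respond(message):
    """Reply if the message matches a known topic; otherwise escalate."""
    text = message.lower()
    for intent, keywords in INTENTS.items():
        if any(keyword in text for keyword in keywords):
            return REPLIES[intent]
    return "Let me connect you to a human agent."  # outside known topics
```

Ask about opening hours and it answers; ask for the meaning of life and it hands you to a person.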
The reality of Eugene Goostman (that is, the argument around the bot’s Turing test ‘pass’) serves as a good example. The bot posed as a 13-year-old non-native English speaker, giving judges a ready-made excuse for its conversational oddities. As such, it’s argued that the bot displays artificial stupidity, rather than intelligence.
The cause of the claims
With true AI capabilities both known and growing, why exactly are things getting blown out of proportion?
With the advent of AI comes hype and excitement, a sense of amazement at what the technology can — and supposedly promises to — do. In turn, this attracts investors. It’s popular, it’s exciting, it’s the next big thing, after all.
The problem is that some companies sell what is essentially AI snake oil. That is, they claim to be an AI company to attract investors when, in reality, they either don’t use AI-powered technology at all, or its use is limited or irrelevant to the company’s output. (Whether the deception is knowing or not.)
Fuelling the fire of AI misinformation is opportunistic journalism. Again, with artificial intelligence capturing the imaginations of many, writing about it can earn clicks. Exaggerating the state of AI to be scary, or surprisingly advanced, makes for a more interesting and enticing story. As such, many of the alleged advances in AI are claims based on flimsy evidence.
The problem of exaggerated AI capabilities
Exaggerating AI capabilities is not a new phenomenon, but it is a problematic one. It sets unrealistic expectations of the technology. It sets users and investors alike up for disappointment. And, of course, it spreads unnecessary fear.
All this misinformation works to hinder the development and progress of AI technology.
In fact, this hindrance has happened before, in a phenomenon known as the AI winter. In an AI winter, the exaggerated claims of AI capabilities lead to disillusionment when the technology cannot live up to overshot expectations. The result is a lack of belief in what AI can do, which leads to reduced interest and so reduced funding.
When it comes to AI, there’s no need to overegg the pudding. Artificial intelligence technologies are expanding and improving. They’re capable of all sorts of impressive things, and there’s much to be optimistic about.
Artificial intelligence is exciting. It’s growing. It’s improving. But beware of the hype around AI — the exaggerated claims of AI capabilities. Falling too far down the rabbit hole only hurts the future of the technology.