
Will the AI singularity happen? Four arguments against it

The AI singularity is a hypothetical future in which technology grows beyond human control. It promises unfathomable changes to life. And most predictions surrounding it are less than positive.

Plenty of works of fiction warn us about the singularity and a future ruled by robots and AI. But the singularity might not be as much of an impending threat as it sounds.

So, will the singularity happen? If any of these four arguments are to be believed, the answer is no.


1. Diminishing returns

The singularity hypothesis predicts that technological growth will accelerate out of our control as technology learns to improve itself. (Particularly in the case of machine intelligence.) But this ignores the role of diminishing returns.

This is where, as technology improves, it takes more and more input to achieve the same amount of progress. We've picked the low-hanging fruit. Now we need ever more effort and input to keep improving.

Rather than a runaway machine intelligence train that sparks the singularity, we're looking at an uphill climb. One that's controlled by us the whole time.

So, will the singularity happen? No, because diminishing returns mean it doesn't matter how many times a future AI improves itself. Each successive improvement will be smaller than the last, so progress tapers off rather than exploding into a singularity. The sketch below shows the idea with toy numbers.
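To make that concrete, here is a minimal numerical sketch. The figures are purely illustrative assumptions (not claims about any real AI system): suppose each round of self-improvement yields only half the gain of the previous round.

# Illustrative sketch of diminishing returns (assumption: each round of
# self-improvement yields half the gain of the previous round).

def capability_after(rounds, initial=1.0, first_gain=0.5, decay=0.5):
    """Capability after a number of self-improvement rounds, where each
    round's gain is `decay` times the previous round's gain."""
    capability, gain = initial, first_gain
    for _ in range(rounds):
        capability += gain
        gain *= decay
    return capability

for rounds in (1, 10, 100, 1000):
    print(rounds, capability_after(rounds))

# The gains form a shrinking geometric series, so capability approaches a
# finite ceiling (initial + first_gain / (1 - decay) = 2.0 here) no matter
# how many rounds you run.

Run it and the output creeps towards 2.0 but never passes it. Under these assumptions, the runaway feedback loop becomes a curve that flattens out: an uphill climb, not a runaway train.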


2. Anthropocentrism

The singularity puts human intelligence on a pedestal. That is, it views human intelligence as a special 'tipping point'. Once the machines reach it, we will face the singularity. The anthropocentric argument criticises this logic.

Anthropocentrism is the belief that humans are the most important entity in the universe. Plenty of anthropocentric beliefs have already proved flawed. For instance, the sun doesn't revolve around us, and we evolved from earlier apes rather than being created as higher beings. So why would our intelligence be special?

The anthropocentric argument points out that we have no real reason to assume that human intelligence is some ultimate and unchallengeable level. Even if we do accept that human-level intelligence is special, it doesn't mean that it's the tipping point for the singularity.

Matching the abilities of a human brain isn't necessarily a gateway for machines to become all-knowing gods. Their intelligence won't suddenly surge beyond measure once they reach human levels.

So, will the singularity happen? No, because there's no reason to think that human intelligence is a special tipping point for AI.


3. Limits of intelligence

Or, we can take the argument the other way. That is, perhaps there's only so far intelligence can go, and human-level intelligence roughly marks that limit.

The 'limits of intelligence' argument recognises that there are many limits within the universe. For instance, you cannot accelerate past the speed of light. Nor can you sustain more than roughly 100-250 stable relationships with other humans (Dunbar's number).

So advancements in AI may well hit a fundamental limit of intelligence. This is another argument that suggests the answer to 'will the singularity happen?' is no.

Intelligence is a complex phenomenon that may well have a ceiling. Human intelligence is probably not at (or even close to) that ceiling. But there's no real evidence that the limit of intelligence sits so stupendously far past human ability that a singularity is possible.


4. Fast-thinking dog

Yes, machines can perform calculations much faster than humans. And yes, they're getting ever quicker at ever more complex computations.

Cue the fast-thinking dog argument: just because machines are faster, it doesn't mean they're smarter. Speed does not equate to intelligence.

This argument points out that intelligence depends on many factors, of which speed is just one. Say, for example, you could make a dog's brain process information more quickly. That faster processing power doesn't suddenly mean that the dog can play a game of chess, or compose a symphony, or weigh up an ethical dilemma.

Will the singularity happen? No, because having a speed or size advantage doesn't mean machines are more intelligent. (Or able to trigger the singularity.)


Will the singularity happen?

These are just a few of the arguments that suggest the singularity isn't on its way. Despite all the alarming headlines, and despite the creeping prevalence of AI in our daily lives, there remains no palpable singularity threat.

So, will the singularity happen? It's unlikely.
