Blurred bot lines and the phenomenon of ‘aiporia’

Are you a bot?

This is a question that we’ve found ourselves asking more and more of late. As we increasingly interact with AI, it’s becoming harder to tell the bot from the human.

A by-product of these blurring bot lines is the phenomenon of ‘aiporia’. 

But what is aiporia, and what does it mean for AI going forward?

Blurring bot lines

In recent years we’ve seen vast advancement in the field of artificial intelligence. It’s still got a way to go before it reaches a human level of flexibility. But, in many areas, AI is now doing a good job of pretending to be human.

Take Google Duplex, for example. Complete with vocal hesitations and minor exclamations, the phone-call AI is almost indistinguishable from a human.

In jobs, meanwhile, blurred bot lines mean that more responsibility than ever before is being handed over to bots. Indeed, artificial intelligence is handling all sorts of tasks, from healthcare to financial advice. 

And the lines between bot and human are only set to grow hazier. Whether behind the scenes in tasks and jobs, or upfront in customer service and daily interactions, discerning between bot and human activity is only getting more difficult.

These blurring bot lines have led to an interesting side effect: aiporia.

What is aiporia?

Coined by Byron Reese, ‘aiporia’ refers to feelings of uncertainty about whether you are dealing with a human or an AI. It’s the sense of puzzlement regarding who — or, indeed, what — we are interacting with at any given time. 

This confusion is compounded when you consider that it’s often both. You may start a conversation talking to AI, then swap to a human — all without a single disruptive or jarring message. Without an explicit signal that we’re dealing with a bot, there’s simply no way to tell.

Aiporia is likely to manifest in two ways in the coming years. Either it’s a gradual, growing uncertainty as AI use becomes more widespread, or people don’t realise they’re using AI at all. When they find out, the sense of deception damages relationships and breeds future distrust — and more aiporia.

In short, allowing bot lines to blur invites feelings of aiporia.

Aiporia and widespread acceptance

But is it ethical to allow this uncertainty to perpetuate? On the one hand, there’s no harm in bots serving us if a human would behave the same way. In fact, for wider AI acceptance, the blurring of bot lines might seem a good thing: if people don’t notice they’re talking to bots, there’s less of a barrier to accepting them. After all, nothing has changed.

Then again, allowing the risk of aiporia is a form of deception, and that in itself causes ethical damage. Plus, most people are already questioning whether they’re speaking to a bot or a human.

People don’t like the feelings of aiporia, or of deception. The backlash against Google Duplex serves as evidence. The mere chance of AI tricking us creates distrust and displeasure. So, allowing aiporia to manifest could hinder widespread AI acceptance.

Impact on jobs

Aiporia also holds implications for the job market. As AI advances, our perceptions of which jobs are better suited to which agent (AI or human) are also blurring.

This means that blurred bot lines are feeding automation anxiety. That is, human team members are starting to worry that bots are set to supplant them. The result is a stressed workplace environment. Extra pressure is now on leaders to find the balance when managing a mixed workforce of bots and humans.

So, aiporia isn’t just about uncertainty over whether you’re dealing with AI or a human, but over which of them you should be dealing with.

Moving forward: redrawing the lines

Combatting aiporia means redrawing the blurred bot lines.

This means adopting a transparent approach to our future AI use. In other words, making sure it’s obvious when and how AI is involved. The simplest way to achieve this is for any AI-powered experience to open with an explicit disclosure stating that it’s a bot.
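As a rough illustration of that transparency principle, here is a minimal sketch in Python of a chat session that discloses who the user is talking to at the start, and discloses again on every bot-to-human (or human-to-bot) handoff. The names (`Agent`, `ChatSession`) are purely hypothetical, not any real product’s API.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    is_bot: bool

@dataclass
class ChatSession:
    agent: Agent
    transcript: list = field(default_factory=list)

    def _disclose(self):
        # Explicit disclosure whenever the active agent is (re)assigned.
        kind = "an automated assistant" if self.agent.is_bot else "a human agent"
        self.transcript.append(
            f"[system] You are now talking to {kind} ({self.agent.name})."
        )

    def start(self):
        # Every AI-powered experience opens with a disclosure.
        self._disclose()

    def handoff(self, new_agent: Agent):
        # Re-disclose on handoff, so the bot/human switch is never silent.
        self.agent = new_agent
        self._disclose()

session = ChatSession(agent=Agent("HelpBot", is_bot=True))
session.start()
session.handoff(Agent("Dana", is_bot=False))
for line in session.transcript:
    print(line)
```

The design choice is simply that disclosure lives in the session itself rather than being left to each agent, so no handoff path can skip it.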

As for job management, aiporia creates a need to find balance. Even when operating at an advanced level, artificial intelligence is still a tool — and tools need people to use them.

So, it’s time to redraw the blurred bot lines. Place emphasis on the human traits AI is incapable of replicating, such as empathy, emotion and deeper understanding. For all AI’s convenience, people still crave human contact — and it takes humans to build trust and relationships.

Integration over aiporia

The future is one integrated with AI: supported by it, not deceived or ruled by it.

The phenomenon of aiporia is a sign of the progress AI advancements have made. It’s up to us now to determine how to handle this by-product of artificial intelligence technology. With transparency and balance, or with the fog of uncertainty and aiporia?

Note: we originally published this article here: