AI for children: the risks and the rights
What effect is artificial intelligence having on children? We typically view the consequences of AI through a working-age adult lens. As a result, the AI conversation is dominated by adult-focused topics: job disruption, workplace evolution, and industry advancement.
This narrow focus overlooks a large group of those affected by AI: children. For all our obsession with AI, we seldom stop to ask how it will affect the young. But that doesn't mean they don't need protecting.
When it comes to AI for children, a distinct strategy and set of ethical guidelines is required. But what exactly are the risks of AI for children — and which rights need protecting most?
Children are growing up with AI around them. And yet, they’re only mentioned in passing (if at all) in most AI legislation.
The simple truth is that children are already impacted by AI. Already, children can interact with Alexa or Google Assistant, asking the questions that their parents won't answer. Already, AI-powered algorithms can have a say in whether a child is deemed at risk, or decide what video they should watch next. As time goes on, AI will likely have more sway over what healthcare they can get and what educational opportunities they're offered.
And these risks aren't lost on the adults overseeing the AI-child relationship. For example, parents must act quickly when a smart speaker misunderstands a request and responds with inappropriate content. Policymakers must find ways to protect children's data; Germany has gone so far as to ban certain connected AI toys.
The notion of child AI protection is key. Children aren’t the same as adults, or as each other. They’re still developing, and they have their full future ahead of them. The way they interact with the world, then, has long-lasting consequences for their growth and future.
So, what rights need to be considered and protected when it comes to AI for children?
- Right to privacy (including the protection of children’s personal data)
Every time a child interacts with a digital service, their data profile becomes more fleshed out. And these data profiles are precisely how AI algorithms make decisions. How, then, can we ensure the privacy and protection of children’s data as it’s collected by AI companies and used to train AI?
- Right to protection against discrimination, abuse, and exploitation
With AI for children involved in decisions that impact their lives, it's imperative that we ensure these systems are free from bias. Algorithmic bias can produce racial, gender, and age discrimination. Without the right safeguards in place, children could be subject to decisions made by unfair systems.
- Right to healthcare, education, and information
How can AI guard these rights? If AI for children involves making decisions that affect these core needs, could it ensure that children get the best opportunities?
- Right to express views in all matters affecting them (including AI)
Children should be educated about AI and how it affects them. It's important that children come to understand the systems that are set to shape their lives so heavily.
- Right to complain formally and legally when rights have been breached
AI for children needs someone accountable, and a way for children and their families to seek redress when their rights are breached.
The 2020 exam debacle in the UK is a clear example of how AI and algorithms can and do impact children negatively. With COVID-19 rendering exams impossible, the grades children received were determined by an algorithm. One that, sadly, very much missed the mark, penalising children from poorer-performing schools.
- Data security
A commonly discussed risk of AI is that of data security. When it comes to AI for children, this is arguably even more pressing. Children do not necessarily know the impact of sharing their data, or how companies may use it. So, how can we protect the privacy of data that’s used and collected by AI for children?
- Future risk
As jobs shift and change under the influence of artificial intelligence technologies, the skills needed will shift too. Children will grow up into a world with a markedly different job landscape, so they need to start learning now the skills that will help them succeed in it.
- Developmental risks
Another risk when it comes to AI for children is how it may affect their behaviour and development. Children are more malleable and impressionable than adults. So, could misused AI encourage bad habits, such as tech addiction? Could AI analytics shape the worldview of children through the videos and content it recommends to them?
There's also the risk of AI's impact on social development and communication skills. We don't always interact with AI in the same way we interact with humans. If children interact with AI more than they do with other people, there's a question as to whether this could have negative consequences.
For all the risks and ethics to consider, it’s also worth looking at some of the potential uses of AI for children.
Facial recognition AI could support the right to safety, by helping authorities find lost or abducted children.
AI could support the right to education by aiding teaching and learning, in both classrooms and during playtime with AI toys. After all, AI can already answer our questions — it represents an easy way for children to access information.
AI chatbot friends could help with social development. For instance, they could provide a safe place to talk about bullying for children afraid to tell adults. Or act as a friend for a child who's lonely. Or even support children with learning disabilities by allowing them to practise social interactions in a safe space.
AI for children
The issue is that when AI for children causes harm, that harm can last well into their future, following them into adulthood.
But AI for children also has a huge potential to do good — improving learning, development, safety and opportunities.
It’s important that the discussions are held now, and that AI is designed without forgetting its impact on the youngest amongst us.