The EU AI guidelines explained



Concerns and ethical questions have circled artificial intelligence since its dawn. But with AI entering our daily lives in ever more ways, we must find a way to accept it. And to do that, it needs to be trustworthy.

In the pursuit of this trustworthy AI, we now have the EU AI guidelines. These guidelines outline what makes for ethical, trustworthy AI systems, as well as measures for creating them.

Here, we explain the EU AI guidelines, what they mean, and the requirements they recommend.


What is trustworthy AI?

First things first, the EU AI guidelines represent a push for trustworthy AI. So, it’s important to understand what makes an AI system ‘trustworthy’.  

The guidelines state that there are three key elements to a trustworthy AI: it must be lawful, ethical and robust. These criteria work together to form trustworthy AI.

Lawful

Lawful means that for an AI to be trustworthy, it must comply with all applicable laws and regulations. The EU AI guidelines treat lawfulness as a given: any AI must always comply with the law. So, they focus more on the other two aspects of trustworthy AI.

Ethical

Ethical means that an AI system must respect ethical principles and values. This is an area that can pose problems. Namely, what counts as ethical varies from place to place.

Robust

Robust AI is an AI system that’s secure, safe and reliable. It’s closely related to ethical AI, with a focus on both the longevity of the AI and the reduction of potential harm.


Breaking down the ethics

The EU AI guidelines focus on the ethics of AI, which is a highly nuanced field. It’s full of conflicting views and ideas about what’s ethical. So, the guidelines outline four broad principles to achieve.

Respect for human autonomy

An ethical, trustworthy AI system is one that respects and preserves human freedom and autonomy. (The ability to make your own decisions and govern yourself.) This means that ethical AI use will not involve deceiving, coercing, or manipulating anyone. Instead, AI should empower humans and augment their abilities.

Prevention of harm

A principled AI will neither cause harm nor amplify it. With AI systems entering more and more critical functions, there’s a need to guard against AI-caused harm. This could be a loss of privacy or dignity, or physical harm, for instance.

Fairness

Fairness is, itself, a broad and complicated topic open to different interpretations. However, in general, fairness in AI is about freedom from bias, discrimination and stigmatisation. This means that the distribution of the benefits and costs of AI use is equal. And that AI outcomes aren’t harmful or restrictive — particularly to vulnerable groups.

Explicability

To trust AI, we need to understand it. We need to know how it reaches the answers it does. This means transparency and explicability. Explicability is more than AI showing its work, though. It means that AI explanations must be clear and interpretable.


Making it happen: the requirements

From there, the EU AI guidelines outline seven key requirements. These are the things needed to achieve the ethical principles put forward. (And, by extension, robust, trustworthy AI.)

So, what are these requirements, what do they entail, and how can you achieve them?


1. Human agency and oversight

The first key requirement in the EU AI guidelines is human agency and oversight. It supports the ethical need for respect for human autonomy.

This requirement posits that AI systems should support human autonomy and decision-making. Artificial intelligence design, development and use should revolve around empowering humans. This means supporting informed decisions and actions. It also includes the right not to be subject to decisions based solely on AI conclusions and automated processing.

Oversight, meanwhile, requires that a human can intervene or play a role in the outcome of the AI process.  The guidelines offer a few ways to achieve this.

A human could have the ability to intervene in any decision an AI makes. (Known as human in the loop.) They could intervene in the design of the AI, and oversee its operation. (Known as human on the loop.) Or, a human could monitor the overall activity of the tool, and decide when and how it’s used in any given situation. (Known as human in command.)
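To make the first of these modes concrete, here’s a minimal Python sketch of the human-in-the-loop pattern. The `reviewer` callable and the confidence threshold are illustrative assumptions, not something the guidelines prescribe:

```python
def decide(ai_prediction, confidence, reviewer, threshold=0.9):
    """Route low-confidence AI decisions to a human reviewer
    instead of applying them automatically."""
    if confidence >= threshold:
        # High confidence: the AI's decision stands (but remains reviewable).
        return ai_prediction
    # Low confidence: a human makes the final call.
    return reviewer(ai_prediction)

# Usage: the lambda is a stand-in for a real human review step.
print(decide("approve", 0.95, reviewer=lambda p: "needs review"))  # approve
print(decide("approve", 0.60, reviewer=lambda p: "needs review"))  # needs review
```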


2. Technical robustness and safety

The next requirement in the EU AI guidelines revolves around the principle of preventing harm. It’s also key to ensuring the robustness of the AI tool. Achieving this means making your AI tool resilient, safe, accurate and reliable.

Resilience means the system is secure: protected against cyberattacks, vulnerabilities and other threats. This security applies to every aspect of the AI and its development — from its training data to its hardware.

Safety means the AI system in question has safeguards in case of problems. For instance, alerts asking for human assistance.

Accuracy in artificial intelligence relates to the output of an AI system. An accurate AI tool is one that generates correct output — whether it’s a classification, prediction or decision. So, the AI labels a picture of a Shih Tzu as a dog, or better, as a Shih Tzu. The predictions it generates prove true, and so on.

Finally, reliable AI works as intended with a range of different inputs, and offers consistent results. This means that the same input should generate the same output each time.
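As a rough sketch of what a reliability check might look like in practice (the `predict` function here is a hypothetical stand-in for a real model):

```python
def check_reliability(predict, inputs, runs=3):
    """Re-run the same inputs several times and flag inconsistent outputs.

    `predict` is assumed to map one input to one output.
    """
    for x in inputs:
        outputs = [predict(x) for _ in range(runs)]
        if any(o != outputs[0] for o in outputs):
            print(f"Inconsistent output for {x!r}: {outputs}")

# Usage with a trivially deterministic stand-in for a real model:
check_reliability(lambda x: x * 2, inputs=[1, 2, 3])  # prints nothing
```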


3. Privacy and data governance

Privacy and data governance is another requirement that relates to the need to prevent AI-fuelled harm.

Artificial intelligence takes and uses a lot of data. There’s the training data, user input, any data generated or inferred by the AI and so on. An ethical, trustworthy AI must guarantee the privacy and protection of all this data for its entire lifecycle.

For a start, companies developing and/or using AI need to manage who can access any of the AI’s data.

Then there’s the data governance aspect of this requirement. This revolves around ensuring the integrity of data, protecting it from malicious input. Data governance is a way to address and prevent algorithmic bias. It makes sure that data is not compromised, incomplete or non-representative of reality.
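A minimal sketch of what such a governance check could look like, assuming the training data sits in a pandas DataFrame with a hypothetical demographic column:

```python
import pandas as pd

def audit_dataset(df: pd.DataFrame, group_column: str):
    """Flag incomplete or unrepresentative training data."""
    # Completeness: report columns with missing values.
    missing = df.isna().mean()
    for column, fraction in missing[missing > 0].items():
        print(f"{column}: {fraction:.1%} missing")
    # Representativeness: show how each group is represented.
    print(df[group_column].value_counts(normalize=True))

# Usage with toy data; 'group' is a hypothetical demographic attribute.
audit_dataset(pd.DataFrame({"age": [34, None, 51], "group": ["A", "A", "B"]}),
              group_column="group")
```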


4. Societal and environmental well-being

The next requirement acts to support both the principles of preventing harm and ensuring fairness. Here, the EU AI guidelines posit that developers and users of AI should consider the impact it may have on the environment, individuals, and society.

This means that AI systems should be as sustainable and environmentally friendly as possible. The social impact of any given use of AI should also be evaluated carefully. For instance, an AI system that displaces someone’s job could cause them social harm.

Beyond this, the impact of AI use on society as a whole should also be assessed. The goal is to ensure that the AI is helping, not harming.


5. Diversity, non-discrimination and fairness

The push to ensure fairness in an AI’s development and use brings with it a need for diversity. This requirement draws diversity, non-discrimination and fairness together.

Achieving diversity in AI means ensuring that the system is free from unfair bias. This is where algorithmic bias represents the most danger. An AI system can pick up on unconscious or unnoticed biases and amplify them. Addressing this is about using diverse training data, and regularly reviewing the output that an AI tool offers.   
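One simple way to review output for bias is to compare outcome rates across groups. Here’s a hedged sketch (the group labels and the parity check itself are illustrative, not mandated by the guidelines):

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Compare positive-outcome rates across groups
    (a simple demographic-parity check)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

# A much lower rate for one group is a signal to investigate, not proof of bias.
print(selection_rates([1, 1, 0, 1, 0, 0], ["A", "A", "A", "B", "B", "B"]))
# {'A': 0.67, 'B': 0.33} (approximately)
```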

There’s also the need for accessibility. A diverse, trustworthy AI system is one that anyone can use. Regardless, that is, of race, age, socioeconomic factors, or ability.

The EU AI guidelines recommend consulting relevant stakeholders (those potentially affected by an AI system) throughout its lifecycle.  


6. Accountability

Another requirement in the EU AI guidelines that relates to fairness is accountability. Accountability in artificial intelligence is about having someone to hold responsible for an AI and its outcomes.

In the guidelines, accountability involves auditability. This is the ability to assess the algorithms, data and design of an AI system. For critical systems, an independent party should conduct such audits. Alongside this, AI users should identify, assess, document, and minimise any negative outcomes caused by the AI tool.
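As one possible way to support auditability, a system could keep a record of every automated decision. A minimal sketch, with illustrative field names rather than any standard schema:

```python
import json
import time

def log_decision(path, input_summary, output, model_version):
    """Append one decision record to an audit trail (JSON lines)."""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "input": input_summary,
        "output": output,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Usage: every automated decision leaves a reviewable trace.
log_decision("decisions.log", {"applicant_id": 42}, "approved", "v1.3")
```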

In some cases, accountability measures can conflict with other ethical protocols. In such an instance, those involved should use their best judgement to resolve the issue.


7. Transparency

Transparency is an oft-discussed tenet of ethical AI. In the EU AI guidelines, it’s a requirement that seeks to ensure explicability.

Otherwise known as explainable AI, transparency in artificial intelligence involves two key elements: the ability to see how an AI reaches a decision, and the ability to understand that reasoning. This means that the human interacting with an AI must be able to understand the explanations it provides for its decisions.
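As an illustration, one common starting point for explainability is measuring which input features most influence a model’s decisions. This sketch uses scikit-learn’s permutation importance; it’s one example technique, not something the guidelines mandate:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Train a model, then measure how much each input feature drives its decisions.
data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

result = permutation_importance(model, data.data, data.target,
                                n_repeats=10, random_state=0)
for name, score in zip(data.feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```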

Transparency also means that humans should always know when they are interacting with — or otherwise using — an AI. Moreover, it’s important that they’re aware of both the capabilities and limitations of the tool. This is key to ensuring they can make informed decisions.


EU AI guidelines explained

The EU AI guidelines recognise that there are risks as well as benefits to artificial intelligence technology. And they place the ethics of AI as the driving force behind AI acceptance. They promote the ongoing development of trustworthy AI, and ways to assess it.

But each system, tool and use involving AI will require different considerations. And, as such, the EU AI guidelines are only an entry point into the massive field of AI ethics and acceptance.


Useful links

AI acceptance and the man on the Clapham Omnibus

Are AI ethics impossible?

Is AI transparency helpful or harmful?
