Since June 2023, the EU's "AI Act" has been taking shape. With this regulation, the European Parliament is planning, for the first time, a legal framework for the development and use of artificial intelligence (AI):

"The European AI strategy aims to make the EU a world-class hub for AI and ensure that AI is human-centric and trustworthy." – European Commission


Throughout this regulation, the terms "explainable", "responsible" and "trustworthy" AI are used frequently.

Let’s take a closer look at these three terms. What do they mean, and what characteristics must an AI system possess to be considered responsible, explainable and trustworthy?

Let’s dive in!

Explainable Artificial Intelligence

We start with Explainable AI because it is an essential building block of Responsible and Trustworthy AI, two terms that encompass significantly more requirements.

Have you heard of the so-called "black box of machine learning"? Probably yes. This black box describes the following phenomenon: a machine learning (ML) algorithm is fed with data, and we get a particular result. But how the algorithm arrived at that result, i.e., exactly which correlations and patterns it has found, is unknown. Not even the engineers or data scientists who create the algorithm can fully explain what happens inside the black box and how the AI algorithm arrives at its results. What is – or should be – known, however, is the data used and the statistical models on which the algorithm is based.

Explainable AI (XAI) is a field of research that uses a variety of methods to shed light on the black-box models of machine learning, based on known parameters. It aims to describe AI models, their expected effects, and potential biases.

Let's take as an example an ML algorithm that predicts customer churn based on a company's historical sales data. Various parameters feed into the prediction, such as what, how much and when the customer bought, when the last customer contact took place, and whether there were any complaints. As a result, we are told: "Customer Z has an 80% chance of churning."

One approach to explainable AI here is to examine the influence of the input data on the forecast. You could remove a parameter and observe whether and how much the forecast changes. If the forecast differs significantly from the previous result, the parameter has a strong influence. That is just one way to make ML models more explainable. The principle sounds straightforward with a few parameters, but it requires significant technical effort with large amounts of data and many parameters. The hardest ML models to interpret are the neural networks used in deep learning.
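To make this concrete, here is a minimal sketch of the idea in Python, using scikit-learn's permutation importance: instead of removing a parameter entirely, it shuffles one parameter at a time and measures how much the prediction quality drops. The feature names, synthetic data and model choice are illustrative assumptions, not any specific churn product.

```python
# Sketch: measuring parameter influence via permutation importance.
# All data here is synthetic; feature names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1_000

# Hypothetical sales-history features per customer.
features = ["purchase_volume", "days_since_last_contact", "num_complaints"]
X = np.column_stack([
    rng.normal(50, 15, n),       # purchase_volume
    rng.integers(0, 365, n),     # days_since_last_contact
    rng.integers(0, 5, n),       # num_complaints
])
# Toy churn label: long silence plus complaints means churn.
y = ((X[:, 1] > 200) & (X[:, 2] > 1)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time; a large accuracy drop means the
# model relies heavily on that parameter.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(features, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

In this toy setup, "days_since_last_contact" and "num_complaints" should dominate, mirroring the remove-and-compare logic described above while keeping the trained model untouched.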

But the benefits of explainable AI are worth the effort. Aside from the fact that explainability could become a regulatory requirement within a few years, it helps developers ensure that the system works as expected. It also reassures end users and helps build trust in AI in business and society. That is also why explainable AI is essential for responsible and trustworthy AI.

The Problem with Responsible and Trustworthy Artificial Intelligence

Let's get straight to the point: there is no standard definition or framework specifying what characteristics an AI system must have to be classified under either term.

There is a multitude of proposals and technical and philosophical discussions about the terms. That is not surprising, since the field is still young and regulation is only beginning to develop.

The two terms are also inextricably linked: they are used almost synonymously, or at least overlap considerably. However, Responsible AI tends to refer more to the creation and development of AI, while Trustworthy AI refers more to its use.

This makes it difficult to separate the two terms, as the development and use of AI influence each other. And that, in turn, makes it difficult for AI companies to develop systems that are officially considered trustworthy and responsible.

Responsible Artificial Intelligence

Roughly speaking, Responsible AI refers to all efforts to develop AI systems in a responsible manner. According to the Gabler Wirtschaftslexikon, this includes explainability (Explainable AI), trustworthiness (Trustworthy AI), data protection, reliability and security.

Responsible AI is therefore a term that encompasses several criteria that an AI system should fulfil. And these criteria are not standardised.

This has led to criticism. For example, Prof. Dr Oliver Bendel, Professor of Business Informatics at the FHNW, comments as follows:
"It continues to be a question of who defines what is responsible in the first place and who benefits from the fact that certain systems emerge and others do not. Ultimately, 'Responsible AI' is a diffuse term that raises high expectations but hardly fulfils them."

The term "Responsible AI" also invites misinterpretation, as came up in a technical discussion between two AI experts at Northeastern University:

“The term still allows for the interpretation that AI has some responsibility, which we certainly don’t mean. We try to emphasize that responsible AI is about creating structures and roles for responsible AI development and that the responsibility will always lie with those structures and the people who develop the systems.”

Trustworthy Artificial Intelligence

The goal of Trustworthy AI is to design and deploy systems that work reliably and as expected. The thesis here is that trust comes from meeting expectations. According to Deloitte's framework, this means meeting the following criteria: robust to unpredictable situations, protected against cyber risks, transparent and traceable (explainable), and fair and unbiased.

Adesso has also created a framework for "Trustworthy AI" that goes even further. It also incorporates external factors, such as legal requirements, certifications and standards.

In principle, all proposals for Trustworthy AI aim to build a stable trust framework to make human-AI collaboration successful and sustainable.

However, here too, there is no consensus, so the criticism of Responsible AI can be applied as well.

 

Explainable, Responsible and Trustworthy Artificial Intelligence – What is Behind Them? – Summary

Explainable AI is technically challenging for AI developers, but the benefits outweigh the effort. Unlike the other two terms, the concept is also relatively clear-cut and therefore easier to implement.

Responsible and Trustworthy AI are not so straightforward. A uniform definition is still missing, and the distinction between the terms is blurred. Many articles describe "Trustworthy AI" as exactly what other articles mean by "Responsible AI".

Nevertheless, the intention behind the terms is essential for the safe use and development of AI systems. For AI companies, understandable and actionable guidance on Responsible and Trustworthy AI would certainly contribute to the continued and safer use of AI systems.

Many AI research and development companies fear that overly strict regulation will stifle important AI innovations.

It will be interesting to see where the AI world goes from here. The jury is still out.


Further Reading:
Tagesschau 14.06.2023: EU-Parlament einigt sich auf Position zum KI-Gesetz

Deloitte: Explainable AI (XAI): Trust through Transparency

Deloitte (2021): Trustworthy AI: Künstliche Intelligenz benötigt die passende Kontrolle

IBM: What is explainable AI?

Gabler Wirtschaftslexikon: Responsible AI

Christoph Kurth (2021): Die vier Grundpfeiler einer verantwortungsvollen KI. Published by Big Data Insider

Adesso (2022): Trustworthy AI – das ganzheitliche Gerüst und was davon zukünftig geprüft wird