AI and Ethics
 
The Ethics of Artificial Intelligence, Part 1 of 2: an in-depth, well-researched article by our Qymatix guest author David Wolf.

Whether in production, marketing, sales or logistics – artificial intelligence optimizes processes and takes over routine tasks. People also hope for better, i.e. more objective, decisions from AI.

That is where it gets tricky. How can machines “decide” objectively if those who develop the algorithms cannot? A plea for the ethical evaluation of AI based on its impact on society.

It is the year 2029. A right-wing populist government leads Germany. Journalist Johann Hellström, who has been banned from working, retreats with his wife to his luxurious weekend house on an island. The house in “Das Haus,” the 2021 German feature film of the same name, is an extraordinary one. Not only is it luxurious, but it is also a smart home through and through – fully digitized, with artificial intelligence directing the action. It automatically takes care of the food supplies, regulates the temperature of the washing water, and, if necessary, locks down the entire building. The AI is so fully adapted to Johann Hellström and his needs that his wife struggles with the technology more than once. The AI ignores her commands.

When I watched the film starring Tobias Moretti, I was fascinated and shocked at the same time – fascinated by the many things that artificial intelligence can do for us humans, at least in the world of the film, if it is programmed accordingly. Shocked by the tragic scenario at the end, when the AI switches to alarm mode and completely locks down the smart home. Because Hellström can’t make it outside in time, he remains trapped inside his own four walls. I couldn’t tell whether the alarm mode corresponded to a programmed scenario or whether the AI had arbitrarily “decided” to seal off the house hermetically. Creepy.

But is it all fiction? After all, at the time of its premiere, the film was set just eight years in the future.

The Case for an Ethical Artificial Intelligence

Smart homes and smart living are no longer dreams of the future. There are already intelligent refrigerators with an integrated connection to the mobile network or the Internet that can automatically reorder food that is running low.

But the possibilities of AI are far from exhausted.

“Soon, each of us could live in a house or apartment with an AI-based electronic doorman. This doorman lets in tradespeople, delivery and postal services when we are not at home. He ensures that tradespeople, for example, enter only the rooms we have cleared for them. The doorman also passes on instructions and, conversely, takes messages for us.”

This assessment comes from the German Federal Ministry for Economic Affairs and Climate Action (BMWK) and refers to the “ForeSight” project, funded as part of an AI innovation competition.

ForeSight is a platform for developing and operating fully networked buildings. According to its own statements, the project focuses on people as self-determined individuals when developing new methods and sees artificial intelligence as more than just data and technologies. In addition to specific security aspects – that is, questions about protecting an AI-controlled system against misuse – the focus is also on ethical issues.

For example: how can we ensure that the AI does not “willfully” decide, without regard to humans, who is allowed into the building and who is not? One conceivable – and unattractive – scenario would be an AI that grants or denies a person access based on their skin colour.

ZEIT Online discussed this in a detailed article back in 2018, reporting that Google software had labelled the photo of an African-American woman “gorilla.” Or that whenever someone entered the search term “professional hairstyle” into the search engine, only blond braided hairstyles appeared in the first results of the image search. How can something like that happen?

Human Decisions Are Never Truly Objective

That AI, as in the examples above, seemingly takes on a life of its own and delivers such skewed results stems from what trains the algorithms: human-generated data. We want computers and machines to make our lives easier, and in many ways they do, for example by automating unpleasant routine tasks. At the same time, we also hope for better and, above all, more objective decisions from algorithms.

But therein lies the fallacy. How are machines supposed to make objective decisions if they are fed and trained with data that carries the human dilemma of decision-making? The dilemma is this: no decision is genuinely objective. Every decision we make mixes sober facts with half-knowledge, individual values, preferences, perceptual biases, expected benefits, and learned prejudices.

If we become aware of this, we cannot avoid urgent questions about the ethical evaluation of AI. Ethics is the branch of philosophy that deals with the presuppositions and evaluation of human action; it is the systematic reflection on morality. At its centre is moral action, especially the question of its justifiability. The ethics of AI-based systems, in turn, is a subfield of applied ethics, concerned with the questions that the development, introduction, and use of AI-based systems raise for the actions of individuals in society. Since humans are the creators and thus the starting point of intelligent machines and systems, we must start at the very beginning – with ourselves – and ask: what do we consider good or right in society? What do we want, and what do we not want? Ultimately, what kind of society do we want to live in?

It is undisputed that artificial intelligence makes our lives easier in many areas. When planning a route, for example, Google Maps takes into account roads on which emergency braking is particularly frequent and avoids them if the alternative route costs little extra time. AI is also used in business, for example, to streamline processes in sales, optimize workflows and relieve employees of unpleasant routine tasks. With the help of AI tools such as predictive analytics, it is possible to analyze large amounts of data from different sources. That spares employees time-consuming manual predictions, and the resulting forecasts are much more accurate.

However, the truth is that AI systems can also be misused in ways that affect the daily lives of entire societies. A sad example is the Social Credit System of the People’s Republic of China. This online rating or social scoring system represents an attempt at total control of the population by awarding “points”: private individuals, companies and other organizations are scored according to whether their social and political behaviour matches what the ruling Communist Party considers desirable. The question of what kind of society we want to live in is unequivocally answered here in a profoundly inhumane way.

Ethics Guidelines for Trustworthy AI Systems

Europe is guided by a contrary – and far more desirable – view of humanity in how AI should be developed and used. In 2018, the EU Commission appointed a high-level expert group on AI to advise on a unified, ethical AI strategy. After months of discussion, the result was ready in spring 2019: a 40-page paper with ethics guidelines for developing and using trustworthy AI systems. These aim to help developers and users of AI systems live up to fundamental ethical principles. The guidelines set out three basic requirements for trustworthy AI:

1. AI should be legally compliant and follow all applicable laws and regulations.
2. AI should be guided by ethical principles and values.
3. AI should be technically and socially robust.

The guidelines cite four fundamental principles of the European Union and declare them the general basis of an ethics of AI: respect for human self-determination, prevention of harm, fairness, and explainability. What constitutes trustworthy AI is then fleshed out in seven requirements:

● Human agency and oversight
● Technical robustness and safety
● Privacy and data governance
● Transparency and explainability
● Diversity, non-discrimination, and fairness
● Societal and environmental well-being
● Accountability

Although the guidelines do not have the status of official regulation or even law, they do provide guidance and an essential framework for the socially desirable development and use of AI.

Most notable is the emphasis on human autonomy. Self-controlling systems that replace human decision-making power and responsibility are not trustworthy AI in this sense. Humans, not technology, are in charge. And artificial intelligence can never be ethical by itself; there can never be an “ethical algorithm” that computes what is ethical and what is not.

The fact that the European Union is serious about human autonomy is demonstrated by the only law that explicitly regulates the use of artificial intelligence so far – the General Data Protection Regulation (GDPR). Among other things, it determines which obligations must be observed when processing data and which rights data subjects have over their personal data. The GDPR thus comes into play whenever AI is fed with personal data, processes it, or uses it as the basis for decisions.

In addition to the Ethical Guidelines for the Development and Use of Trusted AI Systems and the GDPR, several national and international initiatives have emerged in recent years that legislators still need to initiate. Numerous private actors have also taken self-regulatory measures. The Bertelsmann Stiftung’s so-called “Algo.Rules” are a toolbox for developers, programmers and designers of automated decision rules. The practical guide was developed together with the think tank “iRights.Lab” and contains nine algorithmic systems design rules.

“Ethics in Conversational AI: What Should a Bot Do, and What Should It Not?”

We continue with this topic in the second part of our AI and Ethics series.
