
Ethics and Artificial Intelligence

Dr. Olga Levina, January 14, 2020

Ethical issues in the design and application of AI-based systems

While information technology solutions that contain AI components are gradually being used in more and more industries, the need for a social and moral discussion of their design and their effects is increasing as well. This article describes the role of ethics within the design cycle of AI-based systems and why disregarding it can have social, economic and technical consequences.

Information technology applications based on the methods of artificial intelligence have become an integral part of our everyday lives. But the relief they offer us when interacting with our environment (mobility) and when searching for information in all its diversity (Internet search services, content streaming services) is increasingly accompanied by growing social concerns about these technologies. The complexity of the systems, the opaqueness of the paths that lead to their results, and the actions that follow from them mean that the gap between technologies and society is widening. This reduces trust in the quality, but also in the benefits, of AI-based solutions.

The question of how emerging information technology affects society and individuals was asked as early as 1950 by Norbert Wiener in his book "The Human Use of Human Beings: Cybernetics and Society" [1]. It was not until 2002 that Value Sensitive Design (VSD) was conceived as an approach to application development that incorporates ethical aspects and values into the development of information systems [2]. So far, however, this approach has not gained much traction in the professional community. Although the integration of ethical aspects into IT artifacts is increasingly being addressed in society, and although researchers repeatedly call for these aspects to be integrated into system design, there is currently a lack of concrete methods, especially for building systems that use components from the area of machine learning.

The discussion of the ethical aspects of AI technologies is complex. On the one hand, it is unclear which technologies the term AI actually covers, because there is no clear definition of the term "intelligence". Nevertheless, the terms weak and strong artificial intelligence are commonly used to describe algorithmic systems or IT systems designed according to machine learning approaches.

On the other hand, AI encounters precisely what every emerging scientific discipline experiences: the challenge of defining its ethical limits. Research, cost and privacy issues raise concerns that employers, product providers, developers and policymakers will face for years to come. It is true that machine learning methods, that is, what is now often referred to as "weak AI", were developed as early as the middle of the 20th century, and their potential effects on society and on individuals were already a matter of discussion then. However, the discipline, and with it its social effects, has only received increased public attention in recent years due to the rapid spread of the technologies involved.

At a time when the processes of automation and data processing are developing at ever greater speed, the number of their socially relevant applications is increasing accordingly. These advances, however, are accompanied by moral, economic and political dilemmas surrounding the development and use of the resulting products. These conflicts will multiply as groups with very different viewpoints and resources, such as large industrial corporations and non-governmental organizations, struggle over the "correct" use of AI.

Ethics and AI - how do they fit together?

Ethics is a branch of philosophy and is often referred to as practical philosophy. It deals with the reflection on, and justification of, the moral principles guiding the behavior or actions of a person or a group of people. In other words, this term is used to negotiate rules or decision-making processes that are intended to help determine what is considered good or right in the respective society.

The ethics of AI-based systems is a sub-area of applied ethics. It deals with the questions that the development, introduction and use of AI-based systems raise for the actions of individuals in society as well as for the moral norms of a society. The focus is on the question of the extent to which AI-based systems can improve the lives of individuals in the respective society, and of what concerns they raise, e.g. with regard to quality of life or to the human autonomy and freedom necessary for a democratic society.

The numerous current efforts by states, institutions and industry to develop guidelines for the design of "good" [3] and "socially acceptable" [4] AI systems show that this issue is met in society with questions, fears and expectations. But the need for a discussion of the values that are essential in the development of AI-based systems does not exist only on the side of the users. In 2019, around 28 percent of people working in IT had already been confronted in their everyday work with decisions that could have negative consequences for users or society. Among people who develop AI-based systems, the figure was as high as 59 percent [5]. These numbers are an indicator that an ethical discussion is necessary and desired, especially in a technical context.

Effects of the AI-based systems

Applications based on machine learning methods have the potential to make decision-making processes efficient, but also to cause considerable damage. In the context of AI-based systems, damage occurs when a prediction or an end result negatively affects an individual's ability to establish his or her rightful identity and ultimately influences or impairs his or her access to resources [6]. This is referred to as representational or allocative harm. Several examples of such cases have been discussed in the media, and new cases are constantly being added. The reporting highlights use cases of AI-based systems that are based on distorted data, or whose data categories are designed in such a way that correlations can be found that lead to discriminatory decisions [7, 8]. A prominent example of representational harm is the system that Amazon had introduced to select applicants [9]. The machine-assisted issuing of loans and insurance policies can lead to allocative harm [10, 11].

Ethical issues can also have economic implications for the companies that deploy such systems. Alphabet, the parent company of Google, and Microsoft recently warned their investors that misused or poorly designed AI algorithms pose ethical and legal risks for the companies using them [12].

What does socially acceptable AI look like?

As part of its AI strategy, the EU published ethical guidelines for trustworthy AI in April 2019, thereby identifying the issues that will dominate the design of ethical AI [13]. These issues will continue to be a source of controversy for the foreseeable future, forcing third-party providers, users and companies to address the societal impact of some or all of them in the medium term. An important requirement is that human agency and oversight be retained when the systems are implemented. According to this, AI systems should enable equitable societies by supporting human agency and fundamental rights, not by reducing, restricting or misleading human autonomy. Values such as transparency and informational self-determination play an important role here. Private individuals or companies who use AI-based systems must be able to assume that the results and suggestions obtained have been designed in a benevolent, purposeful and appropriate manner.

Another prerequisite for a trustworthy AI-based system is that the algorithms are safe, reliable and robust enough to avoid errors or inconsistencies during all phases of the life cycle of these systems. Particularly across the various areas in which an algorithm is applied, metrics such as the false positive rate can be decisive for whether it is used at all.
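As an illustration of such a metric, the following minimal sketch computes the false positive rate from ground-truth labels and model predictions; the data and the tolerated threshold are hypothetical and would differ from one area of application to the next.

```python
# Minimal sketch: false positive rate (FPR) of a binary classifier.
# FPR = FP / (FP + TN), i.e. the share of actual negatives that the
# system wrongly flags as positive. Data here is purely illustrative.

def false_positive_rate(y_true, y_pred):
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

# The same model may be tolerable for film recommendations but not for
# credit scoring, depending on the FPR the application domain can accept.
y_true = [0, 0, 1, 0, 1, 0, 0, 1]
y_pred = [0, 1, 1, 0, 1, 0, 1, 0]
print(f"FPR: {false_positive_rate(y_true, y_pred):.2f}")  # 2 of 5 negatives -> 0.40
```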

Due to the progress of digitization in industry as well as in individual communication and consumer behavior, a large amount of data is accumulating, which can become, or already is, the basis of new business models. The products created in this way, however, sometimes come at a cost in terms of data protection and data security. The hearings before political committees of the management of large technology companies whose business models are based on data processing show that society is not willing to accept this without question [14].

Socially acceptable AI systems should take into account the full range of human skills and requirements.

Nevertheless, data is the basic building block for training and running AI-based systems. If an application uses the data generated by its users, the users should have full control over their own data, and it should be ensured that the data concerning them is not used to damage or discriminate against them. The demand for fairness and diversity in the design of the systems and their underlying data also belongs in this context. AI systems should take into account the entire spectrum of human skills and requirements, guarantee barrier-free access to resources and opportunities, and allow results and decisions to be traced. This principle of transparency is one of the most important requirements for AI-based systems. Since digital and, above all, algorithmic systems have enormous energy requirements, the aspects of sustainability and ecological effects must also be taken into account when designing these systems. In order to strengthen trust in AI-based systems and to integrate their results into existing processes, mechanisms must also be created that guarantee responsibility and accountability for the systems and their results.
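One way such a fairness requirement can be operationalized, among many others, is the disparate impact ratio between demographic groups, sketched below. The 0.8 threshold (the "four-fifths rule") stems from US employment practice and serves here purely as an illustrative convention; the decision data is hypothetical.

```python
# Illustrative sketch: disparate impact ratio between two groups.
# A value of 1.0 means parity; values below ~0.8 are often treated as
# a warning sign. Decisions here are hypothetical (1 = favorable).

def selection_rate(decisions):
    """Share of favorable decisions within a group."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # selection rate 0.625
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # selection rate 0.375
ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.60 -> below the 0.8 rule of thumb
```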

These general requirements offer a rough orientation for the concrete design and use of AI-based systems. The task now is to bring the questions discussed in public, which often concern AI-based systems and the (ethical) quality of their decisions, together with the ethical and engineering requirements.

Recognize, process, act: the work cycle of an AI-based system

An AI-based system works in three phases: recognize, process and act. The ethical requirements and the resulting consequences for system design are discussed below along these phases.

In the first phase, recognizing, the main focus is on the data, their quality and their suitability for the problem at hand. These aspects play a far more important role here than in the development of non-AI-based IT systems. In addition to the challenges concerning the quality and collection of data, the legal issues of data protection play an important role in this first phase. It must be ensured that the data was collected for a specific purpose and in compliance with data protection regulations. Failure to comply with data protection requirements at the beginning of the design process can lead to enormous effort in redesigning the system. In view of these effects, the questions of responsibility and of resolving potentially resulting conflicts arise here. Accordingly, aspects of transparency such as the comprehensibility and explainability of the models and results already play an important role at this stage.

The data is used to train the algorithmic system to solve a specific problem. In the area of rule-based systems, tasks are solved automatically through the explicit coding of rules and decision paths in the program code. Machine learning takes a different approach: here the system is supposed to "learn" how to solve problems from numerous examples, supported by corrective actions on the part of the developer. From the point of view of social relevance and ethical questions, distorted (biased) or incomplete data sets as well as incomplete or inappropriate models come into play here. The source of the data and its categorization (labeling) raise their own questions about correctness, appropriateness and semantic robustness in the respective processing contexts. In a striking application, two researchers have demonstrated the potential effects of carelessly or one-sidedly curated data, in this case image data [15].
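How such distortions can be made visible before training is illustrated by the following minimal sketch, which audits how labels are distributed across a sensitive attribute; the records, group names and labels are hypothetical.

```python
# Illustrative sketch: auditing label distribution across a sensitive
# attribute in a training set. Strong skew is a warning sign that a model
# may learn spurious, potentially discriminatory correlations.
from collections import Counter

# Hypothetical training records: (sensitive_attribute, label)
records = [
    ("group_a", "hired"), ("group_a", "hired"), ("group_a", "rejected"),
    ("group_b", "rejected"), ("group_b", "rejected"), ("group_b", "hired"),
    ("group_b", "rejected"), ("group_a", "hired"),
]

counts = Counter(records)
for g in sorted({grp for grp, _ in records}):
    total = sum(n for (grp, _), n in counts.items() if grp == g)
    hired = counts.get((g, "hired"), 0)
    print(f"{g}: {hired}/{total} positive labels ({hired / total:.0%})")
# group_a: 3/4 positive labels (75%); group_b: 1/4 (25%) -> skewed source data
```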

These aspects of transparency, fairness, robustness etc. are picked up again in a similar way during the processing of the data. Here, a model and an implementation approach should be found that are suitable for answering the question at hand, taking into account the available data, the functionality requirements and the ethical requirements. It is important not only to keep the error rate of the algorithm low, but also to make the problem-solving process as comprehensible as possible. If the transparency requirement is not met, trust in the solution, and thus the readiness to use it, decreases. Which methods are best suited for this and which can be used efficiently in the implementation must be determined on an application-specific basis.
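The trade-off between error rate and comprehensibility can be made concrete, for example, with an intentionally simple model whose decision path can be printed and inspected. The following sketch uses scikit-learn; the applicant features and labels are hypothetical.

```python
# Minimal sketch: an intentionally shallow decision tree is usually less
# accurate than a complex model, but its learned rules can be printed and
# shown to domain experts and affected users, supporting transparency.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical loan applicants: [income_keur, years_employed]
X = [[20, 1], [35, 4], [50, 10], [28, 2], [60, 8], [22, 1], [45, 6], [30, 3]]
y = [0, 1, 1, 0, 1, 0, 1, 0]  # 1 = credit granted

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# Print the full decision path as human-readable if/else rules.
print(export_text(model, feature_names=["income_keur", "years_employed"]))
```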

Processing the data also entails integrating the algorithm into the IT system, and thus testing the system. The AI-based system is currently viewed by developers as a holistic IT artifact and is therefore primarily tested for its functionality and for the precision of the algorithm with respect to a specific problem. The effects that the use of the system can have on users, or the effects of algorithmic systems on society, have so far been treated as lying outside the focus of software development. However, these aspects must play an immediate role in the decision whether the system is ready to go "live" or not, because the complexity of the systems leads to a lack of transparency in decision-making and explanation. As a result, it is not immediately possible to understand or readjust the decision criteria. Users get the impression that the system takes control and power over information and decisions away from them and thus reduces the autonomy of their own actions. Due to this interlinking of input data, processing model and only partially foreseeable effects, the development process of these systems, as well as the developers themselves, are now in the focus of society.
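A pre-release test suite can be extended beyond pure accuracy checks along these lines. The following sketch shows a hypothetical "go-live gate" that asserts that error rates do not diverge too strongly between affected groups; the threshold and the evaluation data are assumptions for illustration.

```python
# Illustrative sketch: a go-live gate that tests more than functionality.
# It fails the release if the model's error rate differs too much between
# two groups. Threshold (0.1) and data are hypothetical.

def error_rate(y_true, y_pred):
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

def test_error_rate_gap_between_groups():
    # Held-out evaluation data, split by a sensitive attribute.
    group_a_true, group_a_pred = [1, 0, 1, 1, 0], [1, 0, 1, 0, 0]
    group_b_true, group_b_pred = [1, 1, 0, 0, 1], [1, 1, 0, 1, 1]
    gap = abs(error_rate(group_a_true, group_a_pred)
              - error_rate(group_b_true, group_b_pred))
    assert gap <= 0.1, f"Error-rate gap {gap:.2f} exceeds go-live threshold"

test_error_rate_gap_between_groups()
print("Go-live gate passed.")
```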

Another ethically relevant effect of integrating AI-based systems into existing business processes is pattern recognition on large amounts of data, which forms the basis for the decisions generated. Carefully designed and tested machine learning products and systems use a logical network of business rules and social rules to calculate decision paths and thus deliver predictable results for defined inputs. This logic is usually recorded in use case or sequence diagrams, which in turn are based, e.g., on user stories. The business logic and the application scenarios are adopted as they are currently practiced. As a result, the current business practices, which embody the ethics of the company or of the implementation team, are implemented. Through the formalization and concentration of this logic in algorithms, and through the delivery of their results as suggestions or decisions, existing prejudices or possibly unintentional discriminatory business practices are cemented and reinforced at the same time.

These effects arise as results of the last phase of an AI-based system's work cycle: acting.

In addition to the aforementioned representational and allocative harms, which affect individuals as a result of the decisions made, the action results of an AI-based system can take the form of suggestions (such as the selection of music or films), of decisions (as in decision support systems) or of direct actions (as with industrial robots). Each of these types of results can trigger its own societal challenges. Increasing automation can lead to the loss of jobs in areas with low qualification requirements. The algorithmic preselection and presentation of news and information likewise leads to effects that are present in the public discussion under the keywords "filter bubble" and "fake news". The lack of transparency in the criteria for the decisions made and in the processes within the system means, on the one hand, that the acceptance of information technology in general, and of AI-based systems in particular, suffers. On the other hand, algorithmic decisions cannot always be reproduced, corrected and adapted. In many cases, such systems are adjusted only after their implementation, on the basis of decisions already made. This creates the impression that actual live operation is used to evaluate the system's results. Especially in the area of public administration, where social benefits, welfare measures etc. are assigned and administered, this is a socially unacceptable path [16].

What should be done from the IT point of view?

From these considerations it becomes clear that ethics and IT have to be brought together and thought through together. The concrete steps are currently left to individual companies or developers. Guidelines and codes of values are currently the main tools of ethics. However, these are not very effective in everyday software development, because they are too abstract for direct implementation in a software system. Characteristics such as transparency, fairness and traceability of systems and their decisions must therefore be taken into account and integrated by the development and management team as early as requirements elicitation. A diverse team of developers is already a step in the direction of designing non-discriminatory and fair systems.

Ethical questions are negotiated depending on the context among those involved, or within the framework of social conventions. In the context of the development of AI-based systems, they serve to promote the well-being of society and the individual and to initiate innovations that explicitly address these questions. For system design, this means that, as with business rules, there will be no universally applicable concrete implementation patterns. However, it should be possible to fall back on best practices, standards and guidance at the level of the development phases, the application domain and special requirements. The task now is to actively create these examples and tools. The exploration and integration of ethical aspects into digital solutions can only succeed through ongoing discussion and continuous cooperation between users and developers. Otherwise, the technical solutions will shape our social values, and not the other way around.

Dr. Olga Levina

Olga Levina received her doctorate from the Technical University of Berlin and is currently researching digitization topics as a postdoctoral researcher at the FZI Research Center for Computer Science in Berlin.