Master's thesis, 2024
68 pages, grade: 1
Philosophy - Practical (Ethics, Aesthetics, Culture, Nature, Law, ...)
List of illustrations
List of abbreviations
List of tables
1. Motivation, objectives and structure of the work
1.1. Motivation
1.2. Objective
1.3. Structure of the work
2. Theoretical foundations
2.1. Responsible Innovation
2.2. Autonomous driving
2.2.1. Vision and the current state of the art
2.2.2. Ethical challenges
2.3. Development models in the automotive industry
2.3.1. V-model
2.3.2. Systems Engineering
2.3.3. ASPICE
3. Ethical perspectives and their significance for autonomous driving
3.1. Overview and selection of ethical perspectives
3.2. Risk ethics
3.3. Deontological ethics
3.4. Consequentialist ethics
3.5. Case study - How can ethics help the developer in practice?
4. Responsible Innovation 2.0 - A practical target concept for the automotive industry
4.1. Summary overview of the target concept
4.2. Explanations of the target concept
4.3. Transfer to the development models
5. Conclusion and outlook
Appendix
Bibliography
Figure 1: The Gartner Hype Cycle - development of the topics "autonomous vehicles" and "artificial intelligence"
Figure 2: Extract from the Tesla Autopilot FAQ
Figure 3: Visualisation of the structure of work
Figure 4: SAE level of autonomous driving
Figure 5: The trolley dilemma
Figure 6: The product development process and underlying development models
Figure 7: The V-model
Figure 8: The Systems Engineering V-model
Figure 9: Overview of the process reference model according to ASPICE 4.0
Figure 10: ASPICE 4.0 - Certification level
Figure 11: Three perspectives of the economic-ethical connection
Figure 12: The V-model in terms of the target concept
Figure 13: Systems engineering in terms of the target concept
Figure 14: ASPICE in terms of the target concept
Figure 15: Product details for Tesla's Autopilot in the vehicle configurator (1/3)
Figure 16: Product details for Tesla's Autopilot in the vehicle configurator (2/3)
Figure 17: Product details for Tesla's Autopilot in the vehicle configurator (3/3)
Figure 18: Ethical evaluation concepts
Table 1: List of abbreviations
Table 2: Risk ethics - overview of pros and cons
Table 3: Deontological ethics - overview of pros and cons
Table 4: Consequentialist ethics - overview of pros and cons
Table 5: Summary overview of the target concept
Table 6: Transfer of the target concept to the selected development models of the automotive industry
Table 7: Overview of SAE levels for autonomous driving (incl. comparison with classification according to BASt and NHTSA)
Table 8: Potential benefits of autonomous driving
Table 9: Overview of ethical perspectives (detailed)
Table 10: Gethmann's rational concept of risk
Many of the recent disruptive changes in society are primarily driven by advancing technological developments. Artificial intelligence (AI) is probably the most intensively discussed topic at the moment, having caused a worldwide sensation with the publication of ChatGPT in November 2022, for example.1
Since then, a real trend has emerged around AI innovations, which is permeating society. However, in addition to the positive potential, the negative side of this development is also rightly viewed with concern.2 So-called deepfakes (images and videos generated by AI) create false statements and are used specifically for political manipulation, among other things.3
As a result, there is a need for mechanisms to control or, more importantly, to regulate the use of such innovative technologies. The EU recently passed the EU AI Act for the field of AI, which sets limits for the use of AI, particularly from a legal and ethical perspective.4
The Gartner Hype Cycle classifies innovations, such as AI, and assesses their prospects.5 It is often the case that a certain disillusionment sets in with various hypes when the hoped-for successes fail to materialize in the short term.
Another trend that is currently in the trough of disillusionment is autonomous driving, which, according to Gartner's analyses, will probably take more than ten years to achieve the hoped-for breakthrough (see Figure 1).
Illustrations are not included in the reading sample
Figure 1: The Gartner Hype Cycle - development of the topics "autonomous vehicles" and "artificial intelligence"
Source: Own creation (2024) based on (Gartner and Perri 2023; Gartner 2015; Gartner and Perri 2024)
But why is that the case? Didn't Tesla boss and technology pioneer Elon Musk confidently announce in 2019 that autonomous driving would be on the road across the board in 2020? Yes, the announcements were made and several more attempts followed, but in reality, the technical development of such complex systems proved to be much more difficult than expected.6
In the meantime, Tesla has already been overtaken in the development and approval of autonomous driving systems by companies such as Mercedes and BMW. The two German OEMs have received SAE Level 3 approval for their autonomous driving systems (conditional automation, e.g. for highways; see section 2.2.1). By comparison, Tesla's system has so far only achieved Level 2 (partial automation). Another example alongside the traditional OEMs is Waymo (a subsidiary of Google), a developer of so-called autonomous robotaxis, which has also overtaken Tesla with SAE Level 4 (even if autonomous driving is limited to certain regions or cities).7
Tesla customers, on the other hand, are offered a so-called Autopilot, whose very name (and various points along the ordering process, see Figure 15, Figure 16 and Figure 17) suggests that the vehicle is capable of moving completely autonomously. However, there is a gap between this impression and the reality of the system, because Tesla points out rather covertly in the FAQ that Autopilot does not make the vehicle fully self-driving or autonomous (see Figure 2).8
Figure 2: Excerpt from the FAQ on Tesla Autopilot
Source: https://www.tesla.com/de_DE/support/autopilot, accessed on 13.07.2024
However, this notice was only added after a court found that Tesla had misled consumers. Until then, and still today, consumers and customers are led to believe that the functionality is different.9
A rather marketing-based misleading of the customer is one thing, but recently there was the first verdict in a court case in the USA following a fatal accident while using Tesla's Autopilot. A jury found human error due to the influence of alcohol and thus exonerated Tesla. This is just one of many legal cases currently being investigated by the U.S. National Highway Traffic Safety Administration (NHTSA) in connection with Autopilot.10
Under threat of a legally binding penalty, the NHTSA has requested a massive amount of data and information from Tesla. The NHTSA is investigating around 1,000 known accidents in connection with Tesla's Autopilot between January 2018 and 2023, in which 29 people have died so far.11
It is largely undisputed that we need innovations such as autonomous driving to improve quality of life, as individual mobility stands for social participation and enables better participation in economic life. There is therefore a need to discuss and clarify fundamental issues relating to the use of such technologies. This need arises, for example, from industry or from the political debate. Ultimately, the question is how to deal with such technological innovations - from a legal, economic, social and ultimately ethical perspective. What is the best way to deal with them, what should be regulated, how, and under what conditions?
Is there perhaps a need for regulatory approaches comparable to the EU AI Act for autonomous driving, or are there other approaches to make the development of such technologies better and therefore more responsible?
Wouldn't one approach, for example, be to not only hold manufacturers accountable as companies ("top-down"), but also to implement the perspective of ethical responsibility in the actual development process of car manufacturers?
For example, every supplier, down to the individual software developer, could already reflect on the significance and possible interaction of their work "bottom-up" and contribute accordingly, with each individual taking responsibility for the technical development of autonomous driving and corresponding systems.
The aim of this thesis is to explore the role of ethics in selected models of technical development in the automotive industry, using the example of autonomous driving. Based on the examples from chapter 1.1, the following hypothesis can be derived: as of today, there seems to be no direct consideration of ethical perspectives in the common industry standards for technical development in the automotive industry.
The following normative question can be derived as the core of this work: "How can an ethical perspective improve the models of technical development at car manufacturers in such a way that autonomous driving also becomes safer and better?"
In order to answer this question, the goal is approached in several steps:
- The first step is to record selected models of technical development in the automotive sector (see chapter 2.3).
- In the second step, these models are analyzed and evaluated with regard to the consideration of ethical aspects (see chapters 2.3.1, 2.3.2 and 2.3.3).
- In the third step, possible ethical perspectives are identified and evaluated in terms of their significance for autonomous driving (see chapter 3).
- In the fourth step, a target concept for the implementation of ethical perspectives in the selected development models of the automotive industry is outlined (see chapter 4).
The overall aim is to create a target concept for Responsible Innovation 2.0 that combines a wide range of ethical perspectives, both bottom-up and top- down, and integrates them into the selected development models of the automotive industry. This should create the basis for qualitatively better and more responsible technical development in the automotive industry, which creates added value for the development of autonomous driving.
This thesis is divided into five chapters, the structure of which is visualized in the following Figure 3.
Figure 3: Visualization of the structure of the work
Source: Own compilation (2024)
Chapter 1 contains the introduction and motivation for the topic of this thesis as well as the formulation of the objectives.
Chapter 2 introduces the theoretical foundations of this work and focuses on the selected development models of the automotive industry. The models were selected on the basis of their prevalence or establishment as (de facto) sector or industry standards.
The subsequent chapter 3 analyzes and discusses which ethical perspectives exist and which are relevant in terms of the concept of "Responsible Innovation 2.0".
In chapter 4, the ethical perspectives identified above are then transferred to the analyzed development models of the automotive industry by means of guidelines derived from them. It is discussed how these can be implemented both bottom-up and top-down. The result of the chapter is an overview of possible approaches.
The final chapter, chapter 5, draws a conclusion to this work and once again focuses on answering the questions from the objectives (see chapter 1.2). It also provides an outlook on how the results of this work could be used in the future.
As described above (see chapters 1.1 and 1.2), this chapter explains the theoretical foundations relevant to answering the questions posed in this thesis. The relevant sub-chapters deal with the following topics:
- the principles of responsible innovation (see chapter 2.1),
- autonomous driving (see chapter 2.2) and
- selected models of technical development in the automotive industry (see chapter 2.3).
Finally, the connection between these topics and aspects in the context of this work is briefly summarized.
Innovations (from the Latin innovatio: "renewal" and/or "change") are seen as an important driver of economic development. The term innovation is often used synonymously for new ideas and inventions as well as their economic implementation.12
One of the founders of modern innovation theory and research is the Austrian economist and politician Joseph Alois Schumpeter, who, among other things, worked out the difference between invention and innovation as part of his model of the innovation process.
The development of innovations is closely linked to the question of ethics and therefore, for example, our own responsibility. After all, what does it mean when we make technology, an innovation (uncontrolled) available to society? What consequences could an innovation have and what does this mean for people and their participation in the development, their individual responsibility, in that very innovation?
A catchy and historically significant example of this conflict between innovation and ethics is the development of the atomic bomb in the so-called "Manhattan Project". The famous quote "Now I am become Death, the destroyer of worlds" describes Robert Oppenheimer's reaction when he saw the huge explosion and the first mushroom cloud in history in the New Mexico desert. We can only guess how Oppenheimer may have felt at that moment, but it can certainly be deduced from the statement that he wrestled with moral concerns.13
Although Oppenheimer himself never regretted his involvement in the project, after the atomic bombs were dropped on the Japanese cities of Hiroshima (which alone killed more than 140,000 people) and Nagasaki, he was plagued by doubts and fears about the significance of this technology. He subsequently spoke out against an arms race, for example, and other members of the project and third parties also became more critical of the significance of what had been created there. "This thing must not be allowed on earth. We must not become the most hated and feared people on earth, however good our intentions may be," wrote engineer Oswald C. Brewster, for example, in a letter to US President Harry S. Truman.
Even if this example is extreme, it also makes it clear what impact an innovation could have and how closely this is linked to the discussion about the associated responsibility. But who bears this responsibility and where exactly does it begin? Does responsibility begin with politics, society or only with the customer who buys a product?
"Who is in a better position to understand the potential far-reaching effects of these innovation processes than the developers themselves?" is a quote from an article in the renowned IEEE Spectrum, which provides an answer to the question of where responsibility begins. This article deals with the question of what Responsible Innovation (RI) means.14 But what exactly is RI and what does it have to do with ethics and responsibility?
One of the most frequently quoted definitions of RI comes from Dr. Dr. phil. René von Schomberg, Science Policy Team Leader at the European Commission: "Responsible research and innovation is a transparent, interactive process in which societal actors and innovators engage with each other with regard to the (ethical) acceptability, sustainability and social desirability of the innovation process and its marketable products (to enable the appropriate anchoring of scientific and technological progress in our society)."15
Although Oppenheimer never regretted his role in the project, he probably realized the significance and scope of what this innovation (and thus RI) meant for humanity and what consequences it would have.
RI is therefore not just about weighing up possible consequences and unintended side effects, but rather about a creative aspiration: what should the future look like? The basic idea of RI is the invitation to think about what kind of world we want to live in. It is precisely these questions that should (or must) be asked during the development process, because the first answers should come from those who develop the technology or the product.
The so-called Software Defined Vehicle (SDV) describes the shift from the mechanical/mechatronic and thus hardware-based focus of vehicles to a software focus and thus digitalization. Autonomous driving is inevitably closely linked to SDV, as more and more complexity is being incorporated into the vehicle system in the form of software (which then contains the actual intelligence), as well as the necessary sensors and actuators.16
In addition to the already challenging C.A.S.E. trends17 (Connected, Autonomous, Shared & Services, Electric - as already explained using the example of autonomous driving in chapter 1.1), there are also other innovations and potentials, such as automation and AI, which could become key elements for success in the context of autonomous driving.18
Autonomous vehicles will probably operate for many decades in mixed traffic, in which both humans and automated systems act as active drivers and as passive road users. Autonomous vehicles will therefore inevitably find themselves in dilemma situations and will have to react or make decisions autonomously.19
This means that people today must already be working on the development of systems that will be able to make autonomous decisions in dilemma situations tomorrow (see also chapter 2.2.2).
As a result, the automotive industry is facing perhaps the biggest and most challenging transformation in its history. This development has far-reaching effects on all areas of industry, as it will change the way in which vehicles are developed, produced and used (in the future).
The following subchapters provide an overview of the topic of autonomous driving and explain relevant aspects of the basic vision, the current state of the art and the various challenges (e.g. ethical challenges).
The vision of autonomous driving stands for a fundamental human dream: unrestricted and always available mobility. The freedom to decide when and where you want to go. But this is only one possible formulated vision of autonomous driving.
From this vision it can already be deduced that autonomous driving represents qualitative added value in a wide range of areas and will contribute to improving people's quality of life. In addition to the advantages, however, there are also corresponding disadvantages and risks, such as fundamental ethical issues, which are symbolically illustrated by the trolley dilemma (see chapter 2.2.2). But where do we currently stand in the development of autonomous driving?
The technical vision of autonomous driving is just as diverse, although three classification schemes have established themselves as standards in the definition of autonomous driving: the so-called SAE, BASt and NHTSA levels.20 The most common are the SAE levels, which are compatible with the BASt and NHTSA levels. The following Figure 4 visualizes these levels and provides a simple overview.
Figure 4: SAE levels of autonomous driving
Source: Own creation (2024) based on (SAE International 2021)
A detailed explanation of the SAE levels can also be found in Table 7 in the Appendix.
We are therefore currently in the (further) development of Levels 3 and 4, with the latter currently being used primarily for robotaxis in limited operating areas (e.g. Waymo).21
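The SAE classification described above can be captured in a small sketch. This is the author's illustration, not code from the standard: the level names follow SAE J3016, while the driver/system split is the simplified reading used in this chapter (up to Level 2 the human must monitor the environment; from Level 3 onward the system takes over within its operational domain).

```python
# Simplified sketch of the SAE J3016 driving-automation levels (0-5).
# The second tuple element records who must monitor the driving
# environment in the simplified reading used in this chapter.
SAE_LEVELS = {
    0: ("No Automation", "driver"),
    1: ("Driver Assistance", "driver"),
    2: ("Partial Automation", "driver"),      # e.g. Tesla's Autopilot today
    3: ("Conditional Automation", "system"),  # e.g. approved Mercedes/BMW highway systems
    4: ("High Automation", "system"),         # e.g. Waymo robotaxis in limited areas
    5: ("Full Automation", "system"),
}

def who_monitors(level: int) -> str:
    """Return who is responsible for monitoring the driving environment."""
    return SAE_LEVELS[level][1]

print(who_monitors(2))  # -> driver
print(who_monitors(4))  # -> system
```

The sketch makes the qualitative jump between Level 2 and Level 3 explicit: it is exactly the point where responsibility for monitoring shifts from the human to the system.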
In the future, autonomous driving will have to meet a wide range of requirements, including safety, efficiency, sustainability and accessibility, in order to serve the needs of people in society.22 Only then can this innovation be accepted and regulated politically, legally and ethically - these challenges are explained in the following chapter 2.2.2.
The German Federal Ministry of Transport and Digital Infrastructure and its Intelligent Transport Systems and Automated Driving Department published an initial plan of measures by the German government on the report of the Ethics Commission on Automated and Connected Driving (Ethics Rules for Driving Computers) back in 2017.23
This results in a wide variety of measures providing (as far as possible) clear rules and thus guidance for the development and use of autonomous driving, including the following points:
- Continuous review and adaptation of German road traffic law to technological progress,
- Development of policies and guidelines on data protection (and thus, in the broadest sense, the protection of privacy) and security,
- Investigation and discussion of dilemmatic accident scenarios (see the trolley dilemma),
- Development and promotion of broad public acceptance, for example through targeted social dialog, as no one should be forced to use autonomous driving systems,
- Continued work on the international standardization of automated and connected systems based on these ethical guidelines, to enable and promote the safe, cross-border use of autonomous driving technology.
In Germany, for example, the so-called "Law on Autonomous Driving" was passed in 2021, which regulates the legal framework for the future use of autonomous driving (in Germany) until there are international legal standards at EU or UN-ECE level, for example.24 Since then, the legal framework has been continuously developed both nationally and internationally, but the definition of the legal framework is closely linked to ethical problems and issues.25
According to Audi AG's study on the ethical aspects of autonomous driving, autonomous vehicles are only ethically justifiable if the systems cause fewer accidents than human drivers.26
To get a feel for what the statistical benchmark for the goal of "fewer accidents" is, you can look at the current figures and data. According to the Federal Statistical Office, there were an average of 8 deaths and 1,004 injuries per day in road traffic in Germany in 2023.27 According to the World Health Organization (WHO), around 1.2 million people died in road accidents worldwide in 2021 - this corresponds to 15 road deaths per 100,000 people.28
So, if the system behind autonomous driving were able to save even one human life by causing fewer accidents than human drivers, this criterion would be met.
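As a back-of-the-envelope plausibility check on the figures just cited (the calculation is the author's illustration, not part of the cited sources), the two WHO numbers are mutually consistent and the German daily figure scales to an annual total as follows:

```python
# Plausibility check of the accident statistics cited above.
deaths_worldwide_2021 = 1_200_000  # WHO: road deaths worldwide, 2021
rate_per_100k = 15                 # WHO: road deaths per 100,000 people

# Population implied by the two WHO figures (roughly the world population):
implied_population = deaths_worldwide_2021 / rate_per_100k * 100_000
print(f"{implied_population:,.0f}")  # -> 8,000,000,000

# Germany, 2023: about 8 road deaths per day (Federal Statistical Office)
deaths_germany_2023 = 8 * 365
print(deaths_germany_2023)  # -> 2920
```

The implied base of about 8 billion people matches the world population in 2021, so the cited absolute number and the cited rate fit together.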
But if we have to assume that it could take decades before there are only autonomous vehicles on the roads across the board, we will consequently have a mix of human drivers and autonomous systems on the roads in the meantime. This leads to the question of how the system should react and decide in the event of an unavoidable accident.
The basic principle is that the system must make no distinction between people in terms of characteristics such as age, gender, or physical or mental constitution. And the following must apply in principle: one human life should never be "offset" against another. However, this is where the discussion about a fundamental ethical dilemma begins, which is discussed particularly in the context of autonomous driving using the example of the so-called trolley dilemma (see Figure 5).29
Figure 5: The trolley dilemma
Source: Own creation (2024) based on (Rahwan 2020)
The original trolley dilemma (scenario 1) was created by the English philosopher Philippa Foot (a student of politics, economics and philosophy at Oxford University) and was presented in the Oxford Review in 1967.30 Since then, it has been expanded and modified several times. Today's modern trolley dilemma is a thought experiment differentiated into three scenarios, which forces the respondent to distinguish between merely accepting a death and actively instrumentalizing a person:31
- In the first scenario of the trolley dilemma, someone can change the points and steer the trolley onto a sidetrack. The person working there will die and the five people on the main track will be saved.
- In the second scenario, the side track loops back to the main track, on which five people are working. Switching the points leads to the death of the person working on the side track; however, their body prevents the trolley from rolling on to the main track. In contrast to the first scenario, the death of the individual is not only accepted, but is necessary to save the other five.
- In the third scenario, a large man can be pushed off a footbridge onto the tracks; his body stops the trolley and saves five other people. Here, too, the death of the individual is not only accepted, but is necessary to save the lives of the others.
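The distinction the three scenarios draw - a death that is merely accepted versus one that is instrumentally necessary - can be made explicit in a small sketch. The data model and field names are the author's illustration, not a standard formalization:

```python
from dataclasses import dataclass

@dataclass
class TrolleyScenario:
    name: str
    lives_saved: int
    lives_lost: int
    death_is_instrumental: bool  # is the victim's body *needed* to save the five?

SCENARIOS = [
    TrolleyScenario("scenario 1: switch", 5, 1, False),     # death merely accepted
    TrolleyScenario("scenario 2: loop", 5, 1, True),        # body stops the trolley
    TrolleyScenario("scenario 3: footbridge", 5, 1, True),  # man pushed off the bridge
]

# A purely outcome-based comparison cannot tell the scenarios apart ...
net_lives = {s.name: s.lives_saved - s.lives_lost for s in SCENARIOS}

# ... whereas filtering on instrumentalization separates scenario 1 from 2 and 3:
non_instrumental = [s.name for s in SCENARIOS if not s.death_is_instrumental]
print(net_lives)         # every scenario nets the same +4 lives
print(non_instrumental)  # -> ['scenario 1: switch']
```

The point of the sketch is that the outcomes are identical in all three scenarios, so any moral difference between them must come from a property other than the body count - which is exactly the distinction the deontological perspective in chapter 3.3 turns on.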
When reading this, everyone will probably find themselves critically reflecting on their own decision-making behavior. But how does this decision affect society? What moral principles should we follow?
According to an international study of 70,000 participants in 42 countries, the willingness to sacrifice one person to save several differs from country to country. The differences between Western and Asian countries were particularly striking. For example, 82% of respondents from Germany would approve of sacrificing one person to save several, which is similar in most Western countries and can be attributed to a shared set of values. In some East Asian countries, such as China, only 58% would sacrifice one person to save several others. It can be deduced from the findings of this study that the question cannot be answered across the board; the answer also depends on the value system established in the respective cultural sphere.32
In the ethical debate about new technologies such as AI, the question of what moral autonomy humans retain has been playing an increasingly important role. The term autonomy is derived from ancient Greek and literally means 'self-legislation', which refers to the state of self-determination or freedom of choice. In German society, the understanding of the concept of autonomy is characterized by Kantian philosophy, which is based on the following premise: "Autonomy of the will is the property of the will whereby it is a law unto itself (independent of all properties of the objects of the will). The principle of autonomy is, therefore, not to choose otherwise than in such a way that the maxims of one's choice are at the same time comprehended in the same will as a universal law."33
In connection with autonomous driving, however, the takeover of human decisions by driverless systems also means a restriction of the freedom of choice of humans, who would otherwise drive the vehicle themselves. As a rule, this has no direct ethical implications: if an autonomous system chooses the fastest or most attractive route, it is not making an ethically relevant decision. However, the trolley dilemma explained above illustrates that an autonomous vehicle, or its system, must also make decisions that have ethical consequences.
Such dilemma situations in connection with autonomous vehicles are basically characterized by the fact that an autonomous vehicle, and no longer a human being, is faced with the decision between two evils.
This is illustrated by a specific example from the German Ethics Commission of the BMVI on connected and autonomous driving: "The driver of a car drives along a road on a slope. The fully automated car recognizes that several children are playing on the road. A driver acting autonomously would now have the choice of taking his own life by driving over the cliff or accepting the death of the children by driving towards the children playing in the road. In a fully automated car, the programmer or the self-learning machine would have to decide how to handle this situation."34
This very concrete example makes it clear that it is not only necessary to answer the question of weighing up human lives, but also whether decisions of this scope should not already lie with the programmer or developer of the system (or the driver himself).35
However, if the responsibility for decision-making is shifted to the developer or the state in the form of appropriate legislation or regulation, there is a risk that the moral autonomy of the individual will be severely undermined or restricted.
The consequence of over-regulation is problematic in many respects, and the transfer of powers to machines must therefore be carefully weighed up. On the one hand, there is the threat of state paternalism, in which a "correct" ethical course of action for solving ethical dilemmas is prescribed externally; on the other hand, it contradicts the world view of humanism and human dignity, in which the individual is at the center of the approach.36
Another legal and ethical challenge in this context is liability in the event of accidents, which will occur particularly in the transition phase in which autonomous systems and human drivers will be on the roads at the same time. The issue is closely linked to the moral self-determination of humans, as (according to the current legal situation) only individuals with clearly recognizable decision-making powers can be liable.
The importance of ethical guidelines unfolds in two ways: on the one hand, they signal that ethics must be an active part of technical development and must not be limited to purely ex-post considerations. On the other hand, general procedural standards are a first step towards clarifying ethical and legal responsibilities (e.g. of developers or drivers).37
All in all, any number of thought experiments can be opened, such as the question of what role personal rights and data protection play in the context of autonomous driving or what abuse scenarios are possible through the hacking and manipulation of autonomous vehicles. And, of course, what responsibility lies with the programmers and developers of this technological innovation.38
The question that ultimately remains is how to take all these aspects into account and integrate them into the technical development process so that the resulting problems and challenges do not have to be dealt with ex-post. To address this question, the next step is to take stock of development models in the automotive industry (see chapter 2.3).
Development in the automotive sector is based on the product development process (PDP) and the underlying development processes and models. The development of series or platforms, for example, is realized as a project.39
Common practice in vehicle development is that the PDP is divided into phases, which can be differentiated into sub-phases in the actual product development. Within product development, there are corresponding development models according to which technical development then takes place. Figure 6 visualizes the relationship between the PDP and the development models.
Figure 6: The product development process and underlying development models
Source: Own creation (2024) based on (Heimes et al. 2024, p. 342)
The selection of the V-model, Systems Engineering and ASPICE as the development models presented here is based on their prevalence as (de facto) industry standards, some of which also serve to fulfill regulatory requirements (e.g. DIN or ISO standards).
These three development models are presented and explained below in chapters 2.3.1 to 2.3.3.
The V-model is a process model in project management for the development of complex systems. It is presented here because it is widely used in the automotive industry and is considered the most widely adopted model in the field of technical vehicle development.
This process model was developed in 1979 by the American Barry W. Boehm and has been used in software development ever since. The V-model is based on a linear approach derived from the likewise linear waterfall model.40 In the context of project management, linear means that all sub-steps of a model are carried out and completed one after the other. The core of the V-model is the division into specification (left-hand side) and verification (right-hand side).
The form of the model is derived from these two areas, which was also decisive for the naming of the process model.41 These two phases are also referred to in literature as the analytical and synthetic phases. The analytical phase describes the decomposition of the requirements, whereas the synthetic phase describes the assembly of the individual components.42
Figure 7 visualizes the V-model; the author has supplemented it with the corresponding processes (grey) that run continuously and in parallel to the actual course of the model (green).
Illustrations are not included in the reading sample
Figure 7: The V-model
Source: Own illustration (2024) based on (Bauer et al. 2022; Technical Committee 4.10 2022, 18 ff.)
The left-hand side of the V-model is primarily concerned with the requirements and specifications of the system. A top-down approach is followed, in which relevant information is first collected through a requirements analysis and the objectives of the project are defined. Further down the left-hand side, the requirements become more detailed, as does the project planning. Next, the system design and system architecture are drafted during the project. The overall picture of the target system is considered. Once the system architecture has been determined, individual software components are designed in the low-level design and finally the exact implementation of all components is planned. As soon as the left-hand side of the V-model has been completed, the development teams take over the actual implementation of all defined software components.43
Implementation is followed by verification. In contrast to the left-hand side, the right-hand side works bottom-up. The V-model consists of different levels, represented in the graphical V-shape, and each step of the requirements phase described above corresponds to such a level. Testing on the right-hand side proceeds from bottom to top: the software or system is tested at each level. First, the implemented components are checked with component tests, in which the components are considered in isolation from each other. Integration tests then focus on the interaction between the components; among other things, interfaces and data exchange are tested. Finally, the entire system is verified and validated.44
In the context of quality management, a distinction must be made between validation and verification. Validation checks the extent to which the specifications meet user requirements. Verification, in contrast, compares the implementation on the lower levels of the V-model with the requirements derived from those user requirements. When going through the right-hand side of the V-model, the various tests must be taken into account and scheduled in line with the requirements.45
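The distinction between the test levels of the right-hand side can be sketched with a minimal, hypothetical Python example; the functions and figures below are illustrative assumptions, not taken from any real vehicle system:

```python
# Hypothetical components of a driver-assistance feature (illustrative only).

def braking_distance(speed_mps: float, deceleration: float = 8.0) -> float:
    """Component 1: braking distance in metres for a given speed in m/s."""
    if speed_mps < 0 or deceleration <= 0:
        raise ValueError("invalid input")
    return speed_mps ** 2 / (2 * deceleration)

def should_brake(distance_to_obstacle_m: float, speed_mps: float) -> bool:
    """Component 2: decide whether to brake; uses component 1."""
    return braking_distance(speed_mps) >= distance_to_obstacle_m

# Component test: component 1 in isolation (lowest level of the right-hand side).
assert abs(braking_distance(20.0) - 25.0) < 1e-9

# Integration test: the interaction between the two components,
# including the data passed across their interface.
assert should_brake(distance_to_obstacle_m=20.0, speed_mps=20.0) is True
assert should_brake(distance_to_obstacle_m=50.0, speed_mps=20.0) is False
```

Each assertion corresponds to one level of the right-hand side of the V: the lower the level, the more isolated the unit under test.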
However, this procedure assumes that a complete specification exists. Requirements that were not yet known at the beginning of the left-hand side of the model may therefore go unconsidered during the project, including aspects that are relevant for the users of the system.46
For this reason, in real-world applications, adjustments are often made to individual projects and, even more so, to the system and the underlying software. In the automotive industry, for example, individual steps are often repeated incrementally, so that the V-model is run through several times.
Adaptations of the model have led to various further developments over time, such as the V-Modell XT, the W-Modell and the inc-V-Modell. The Association of German Engineers (VDI) has also adapted the V-model for the development of mechatronic systems and codified it in the VDI/VDE 2206 guideline.47
Finally, it should be mentioned that there is no direct mention of ethics or ethical responsibility or similar within the V-model.
One definition of systems engineering (SE) is as follows: "Systems Engineering is a development methodology for handling the growing complexity of multidisciplinary technical systems and reducing risks".48
It is therefore a methodical approach to overcoming various challenges in the development of technological products, for example due to the ever-increasing complexity of more and more software and the interactions between individual (sub-) systems. Increased interaction and integration therefore increase the complexity of systems and the determination of relationships between cause and effect becomes more demanding and multi-layered. As an example of this, the authors Gräßler and Oleff cite the determination of engine data from the manufacturer General Electric. Here, 150 million sensor data records are collected during a long-haul flight, transmitted immediately to several locations and evaluated in real time, which illustrates the complexity mentioned above once again.49
SE sees itself as a discipline that considers different systems from different areas of "engineering" and combines them with different methods. For example, software would be the subject of software engineering, and one method would be the V-model (see chapter 2.3.1). SE now combines these systems and methods from the various disciplines, allowing complex systems to be viewed and analyzed more comprehensively.50
The greatest advantage of SE for coping with complexity is systems thinking. As part of systems theory, which describes the relationship between subsystems and overall systems, systems thinking is used to structure all the conditions of an overall system. The aim is to divide the system into different subsystems and to define all requirements and aspects of each subsystem. This also leads to the necessity that SE is an interplay of different disciplines, such as software, electrical or mechanical engineering. It therefore always forms the basis for systematic problem solving. The following Figure 8 visualizes the SE approach.51
Illustrations are not included in the reading sample
Figure 8 : The Systems Engineering V-model
Source: Own creation (2024) based on (Allouis et al. 2013, p. 2-3; UC Berkeley)
It is noticeable that the V-model and SE overlap considerably, as can be seen visually in Figure 7 and Figure 8. The biggest difference, however, probably lies in the following three aspects:52
- systems thinking,
- the development methodology and
- the role of the system engineer.
The systems engineer in particular takes on a wide range of responsibilities and tasks in SE, which to a certain extent also includes empowering the company and its developers and development teams.
Although the SE literature only occasionally mentions the aspect of ethics or ethical responsibility, the role of the systems engineer shows that there is an awareness of responsibility and accountability and of empowering developers within an organization.53
The last development model to be considered in this thesis is the so-called Automotive Software Process Improvement and Capability Determination (ASPICE). Strictly speaking, ASPICE is a process model that was developed specifically for the automotive industry and was initially based on ISO/IEC 15504 (today ISO/IEC 33001 is used as a reference). ASPICE was established at the end of the 1990s by a consortium of OEMs such as Volkswagen and BMW and suppliers such as Bosch and Continental. ASPICE is about evaluating the capabilities of software development processes and was developed to meet the increasing demands on the quality and safety of software in vehicles.54
ASPICE was originally developed as an instrument to qualitatively assess suppliers and partners of OEMs and to certify a corresponding process maturity level by means of assessments. This was intended to ensure that projects were realized in the best possible way in terms of time, quality and budget. The following Figure 9 provides an up-to-date overview of the process reference model according to ASPICE in the current version 4.0.
Illustrations are not included in the reading sample
Figure 9: Overview of the process reference model according to ASPICE 4.0
Source: Own compilation (2024) based on (VDA Working Group 13 2023, p. 15)
The aim of ASPICE is to have the quality of the processes (for each individual process or process group such as project management in accordance with MAN.3) checked and evaluated in accordance with a defined metric in order to achieve an ASPICE level and obtain certification.
However, ASPICE itself does not specify exactly what these processes should look like. It is important to note that it is not a company that is assessed, but only a specific project. In practice, many OEMs require their suppliers to achieve at least ASPICE level 2 certification over the course of a project. The following Figure 10 provides a simple overview of the different certification levels and the criteria that must be met:
Illustrations are not included in the reading sample
Figure 10: ASPICE 4.0 - Certification level
Source: Own compilation (2024) based on (VDA Working Group 13 2023, p. 18-23)
Initially, ASPICE was based on the V-model, as this was the established development model in the industry at the time. Today, ASPICE has gained in importance, particularly because of Volkswagen's so-called Dieselgate, which emerged in 2015. OEMs want to use ASPICE to prevent decision-making processes (in technical development) from being non-transparent, improperly documented and inadequately evaluated. For a better understanding: in Dieselgate, based on the decisions of a few developers and managers in development, software was deliberately changed so that the pollutant values for nitrogen oxides (NOx) in the exhaust gas of diesel engines met the legally prescribed limit values on paper but not in regular operation, where many times more nitrogen oxides were emitted.55
Interestingly, this manipulation did not lead to increased fuel consumption or similar drawbacks, but customers felt lied to and therefore cheated, which created a problem of trust and tarnished the image of the Volkswagen brand for a long time. In addition to the violation of the law itself, this feeling of fraud probably contributed to the media attention given to the issue; few people are aware, however, that other OEMs such as Opel, Fiat and Alfa Romeo have also attracted attention through comparable manipulation.
This is the reason why ASPICE is no longer used only for external OEM projects (i.e. those in which a Tier 1 supplier such as Bosch or Continental is commissioned with the development and supply of hardware and software) but also for the OEMs' internal projects. One could say that the basic idea is "trust is good, control is better", and ASPICE is therefore also about transparency and responsibility in technical development.
The previous chapters 1 and 2 explained why ethical issues arise in the context of the development of technical innovations and what potential significance this may have for those involved in their development.
The next step is to answer the question of which ethical perspectives can be considered for the scope of this work. The ethical perspectives refer to different points of view and aspects of ethics.
Ethics does not offer blanket answers or recommendations for action but provides guidelines. These guidelines form the basis for the necessary capacity for reflection, enabling the persons concerned to make judgments and shape actions with regard to ethical aspects and issues.56 In simple terms, one could say that different ethical perspectives lead to different views and approaches and thus to different evaluation criteria (see Figure 18 in the appendix for an illustration).
As the topic of developing autonomous driving is primarily driven by commercial enterprises, a distinction can also be made between different levels to which corresponding ethical perspectives (and thus guidelines) can be assigned (see Figure 11).
Illustrations are not included in the reading sample
Figure 11: Three perspectives of the economic-ethical connection
Source: Own creation (2024) based on (Schmidt 2023a, p. 184)
The meso level, for example, takes the view of a company, which is why corresponding guidelines follow a top-down approach. This is complemented by the micro level, which concerns the ethical view of each individual (e.g. the developer) and thus speaks in favor of a bottom-up approach. The macro level, in turn, comprises legal requirements that affect all companies and institutions equally, and with them ultimately the people of the respective organizations.
In principle, there are many ethical perspectives that can be considered as a starting point. In the next step, established perspectives are therefore presented and evaluated. Based on this, reasons are given as to why not all perspectives can be used to answer the key question(s) of this work and why three specific perspectives were chosen (see chapter 3.1).
The selected normative-ethical perspectives of risk ethics, deontological ethics and consequentialist ethics are then presented and discussed in the following sub-chapters (see the chapters 3.2, 3.3 and 3.4). It is also discussed how these can be applied in terms of the combination of a top-down and bottom-up approach.
From these ethical perspectives, it then follows what responsibility exists for the individual developer, for example. The guidelines that are relevant and transferable for the presented technical development models of autonomous driving are then derived from this (see chapter 4).
In this chapter, a wide variety of ethical perspectives are presented by means of an overview, briefly explained and then reasons are given as to which of these are taken into account in the further course of this work.
It should be borne in mind that there are many ethical perspectives and not all of them can be fully considered here. For this reason, the ethical perspectives most frequently mentioned in the literature were selected and researched as part of this work.
A detailed overview of the ethical perspectives is presented in Table 9 in the appendix. The ethical perspectives that emerged from the literature identified for this work were consolidated and divided into normative and descriptive perspectives.
A direct comparison between the normative and descriptive perspectives of ethics shows that descriptive ethics is particularly concerned with the question "Why did someone act the way they did?" or "What caused someone to act the way they did?". In relation to the question posed in this paper, this would mean that we can retrospectively evaluate the motivation behind the actions of a developer, for example.
The choice therefore falls on the normative perspectives, as these make it possible to define a target state in terms of the aim of this work. The aim of normative ethics is to provide rules or guidelines on how we should act.57 This means that normative perspectives are very well suited to be considered as a binding component in the corresponding development models.
The ethical perspectives to be considered are risk ethics, deontological ethics and consequentialist ethics. Why these three?
Risk ethics would confront both the individual and the organization or company with the question "What risks could occur, how do I avoid them as far as possible and how do I deal with these risks should they occur?".
This is a question that is consistent with the methodical approach to risk management, as anchored in COSO Enterprise Risk Management (COSO ERM:2017), the risk management standard ISO 31000:2009 and the quality management standard ISO 9001:2015, for example.58 This would probably make it quite easy to anchor a risk ethics component with general acceptance in the technical development models, as it represents a kind of preventive protection concept for both the developer and the organization. A more detailed examination and discussion of risk ethics can be found in the chapter 3.2.
Deontological ethics focuses on compliance with moral, ethical guidelines and rules as the motivation for an action and does not evaluate the result of an action. It could therefore be said, for example, that a programmer who has acted to the best of his knowledge and belief when deciding how the software should behave is behaving deontologically correctly, even if the result has led to an accident.
In principle, we must assume that no developer deliberately acts in such a way that their contribution to the development of autonomous driving systems ends in negative consequences; otherwise, it would be difficult to discuss which guidelines we can derive from deontological ethics as a kind of target state. Chapter 3.3 explains deontological ethics and discusses which of its aspects could be transferred to the models of technical development. In this respect, deontological ethics is to a certain extent the direct opposite of consequentialist ethics.
The third and final ethical perspective considered in this paper is consequentialist ethics. Consequentialist ethics evaluates the result of the action and not the motivation. It can be said that a good result can also emerge from a bad intention and vice versa.59
Consequentialist ethics will therefore be discussed last, as - as can be seen from the name - the consequences (and not the motivation of the action) are in the foreground. For example, one could ask both developers and companies the question "Does the system work flawlessly and has it not proactively led to accidents and harmful situations?" and then show that everything has been done to ensure that the system of an autonomous vehicle works in the best possible way. Would it then be morally justifiable to accept the potential damage resulting from a still immature innovation if we thereby support the opportunity to (further) develop these technologies (up to a possible flawlessness)?
Particularly in the development of technologies, as explained in the previous example of Oppenheimer (see chapter 2.1), the scope of the consequences is often hardly fully foreseeable. Consequentialist ethics is therefore also presented in a sub-chapter (see chapter 3.4) and discussed in terms of how it can be used for the objectives of this thesis.
None of the other ethical perspectives mentioned in Table 9 were selected, as these are hardly considered in the literature researched on autonomous driving.
The concept of risk is omnipresent in our society today and is discussed almost as a matter of course in connection with events and actions. The history of the concept of risk in this interpretation can be traced back to the 14th century. At that time, risk was associated with long-distance and maritime trade and the possible loss of trade goods, which is remarkable "[...] as it implicitly assumes the position of a rational actor who no longer regards the imponderables of economic activities as events to be accepted as fateful, but as (more or less) calculable uncertainties."60
The term risk is often used in a comprehensive sense to characterize decision-making situations in which a possible action ex ante (i.e. at the time of the decision) can lead to at least two different consequences, whereby ex post61 only one of these possible consequences can actually occur. In addition, the situation-related decision or action of an actor (e.g. developer or programmer) must be relevant either for the realization or for the type or extent of at least one of the consequences. The potential results of a risk situation described in this way, i.e. the possible consequences, can then be specified qualitatively (as benefit or damage) and, if necessary, also quantitatively (in terms of the level of benefit or the extent of damage). While risk identification and assessment deal with descriptive questions ("What is the case?"), the risk assessment phase is at the normative level ("Which decision or action is correct?").62
Put simply, the concept of risk is considered to be associated with a generally negative consequence in connection with the decision of at least one person.
In principle, it can be said that there are only a few and very different positions on risk ethics. For example, according to Schrader-Frechette's procedural risk ethics, risk assessments should include objective aspects as well as social and ethical values.63 This seems sensible when you consider that innovative technologies such as autonomous driving, for example, also depend on social acceptance.
For the purposes of this work, it therefore makes sense to consider the interaction between action (i.e. what one does or does not do) and reaction (the resulting risk). This enables each individual to become aware of the importance of their responsibility and the possibility of exerting influence. Risk can thus be addressed from both a "bottom-up" and "top-down" perspective, which leads to the question of how to discuss and evaluate risks and the scope of possible scenarios.
One position holds that as soon as the risk of an (innovative) technology - such as nuclear energy, but possibly also the release of genetically modified organisms - includes catastrophic risks, the technology is not acceptable. Two main objections have been raised against setting such a categorical upper damage limit: Firstly, since the alternatives are often not free of catastrophic risks either, none of the available options would be selectable under this rule. For example, abandoning nuclear energy could also lead to catastrophic emergencies, e.g. systemic damage to the livelihoods of many developing countries due to the greenhouse effect. Secondly, the probabilities of disasters occurring should not be completely disregarded. Catastrophic damage scenarios are not only conceivable but also an unfortunate reality for many formerly innovative or already established and conventional technologies, such as the catastrophic chemical accident in Seveso in the 1970s or the nuclear reactor accident in Chernobyl in 1986.64
But what is the best way to deal with risks and make them assessable in the context of risk ethics? The German philosopher Carl Friedrich Gethmann has proposed a solution to this problem by measuring the reasonableness of risks according to a "principle of pragmatic consistency". According to this principle, risks are considered reasonable to the extent that they correspond to the risk acceptance expressed in real-life actions. The willingness to take risks, which an actor reveals through his actions, should provide the yardstick for the risks that others may expect him to take - one could say: the risk to which I expose others is also the same risk to which others may expose me. This principle does justice to the requirement to take into account not only the benefits and harm arising from risky actions but also the risk attitudes of those affected, such as the particularly pronounced risk aversion in the case of nuclear power.65
We are thus talking about a kind of reasonableness of risks, which makes sense in itself, because in the reality of life we can never ask everyone directly and obtain their decision (e.g. "I understand the risk and am prepared to accept it"). And even if we could, it would be almost unrealistic for us to reach a 100% unambiguous decision (insofar as this is a benchmark for decision-making) due to people's different risk aversions, for example.
An important aspect closely linked to the reasonableness of risks, alongside the probability of possible damage occurring, is the threat posed by a risk (i.e. the sense of threat before the risk materializes or the damage occurs). This is a mostly subjective perception of individuals, which is nevertheless decisive for the overall decision on how to deal with risks. This can be illustrated particularly well using the example of insurance: an insurance policy is valued not only according to its benefits in the event of a loss, but also according to the benefit of the security of being able to fall back on the insurance benefits should a loss occur. One could say that insurance places risks in a kind of cost-benefit ratio.
In the current engineering literature on risk assessment, there is a certain regret that, when allocating safety investments, a considerable number of statistical deaths are sometimes accepted in order to appease public fears that are considered unjustified (and therefore rather subjective) and to create acceptance. In the example of the nuclear power plant, measures to prevent a fatality are valued significantly higher than in the case of (statistically) far riskier participation in public road and car traffic. For the individual and for society, however, threat effects and the resulting fears, possible insecurity and loss of trust in technologies, like psychological damage that is difficult to measure, must be taken just as seriously and considered in risk assessment and evaluation as actual deaths and illnesses.66
Gethmann's concept of risk is based on the perspectives of probability and the extent of damage - both of which can also be found in the aforementioned standards for mathematically oriented risk assessment in risk management (see page 22). For Gethmann, it is important that the risk is a typified disaster and not an isolated perception of danger, for example by a person. Gethmann derives his "rational concept of risk" from this understanding.67
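Read quantitatively, this combination of probability and extent of damage corresponds to the expected-damage calculation familiar from risk management standards. A minimal sketch in Python, where the scenario names and figures are purely illustrative assumptions:

```python
# Risk read as probability x extent of damage (scenario names and figures
# are illustrative assumptions, not empirical values).
scenarios = {
    "minor software fault": (0.01,    1_000),       # (probability per year, damage in EUR)
    "sensor failure":       (0.001,   100_000),
    "system-level failure": (0.00001, 10_000_000),
}

expected_damage = {name: p * d for name, (p, d) in scenarios.items()}

# A rare catastrophic scenario can carry the same expected damage as a far
# more frequent moderate one - the purely mathematical measure cannot tell
# them apart, which is why the perceived threat matters as well.
assert abs(expected_damage["sensor failure"] - expected_damage["system-level failure"]) < 1e-6
```

This also makes Gethmann's point concrete: the product of probability and damage alone misses the typified threat perception that his rational concept of risk adds.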
Gethmann explains the rational concept of risk in various steps and studies, which are summarized in Table 10 in the appendix. The following aspects are particularly important for this paper:68
- Actively taking a risk vs. living under risk
A distinction is made here between a "given" and a "chosen" risk. A given risk (living under risk) is a kind of omnipresent risk, as it affects everyone who uses public transport, for example. We socially accept risks and take them for granted. For example, even if I am just going for a walk, I am a passive road user and could theoretically be run over by a bus or car. This could therefore be described as an accepted passive risk that we do not think about directly in our daily lives. In contrast, a chosen risk (actively taking a risk) is, for example, when I get into a car and drive it myself. I am also aware that I could cause an accident due to my own mistakes or similar. In doing so, I am actively taking a risk and could therefore be a risk for passive road users.
- Standard risks vs. non-standard risks
With standard risks, there is an identifiable agent of an action, i.e. the person who does something or performs an action under risk, and an identifiable party affected by the consequences of the action, i.e. those who, in case of doubt, bear the negative consequences of a risk (this does not always have to be the acting party itself). Non-standard risks, by contrast, refer to special circumstances such as "nature" or "society" as agents or affected parties. Only standard risks can be used for a rational risk assessment, because non-standard risks are comparable to the aspect of "living under risk".
- Undesirability
Here, too, people are assumed to be fundamentally rational. We assume that when choosing an action, we never choose risk for its own sake but rather accept the undesirable side effects in order to achieve a different advantage or benefit. For example, we accept that we must use public transport for our individual mobility and accept that this involves risks such as accidents. However, we use public transport to get to work, to pursue our leisure activities, etc. and not because we want to have an accident.
Why should these points in particular be emphasized? All these points can be discussed in the form of questions from the perspective of a developer as well as from the perspective of an organization. For example, is it a given or a chosen risk if I allow myself to be driven by an autonomous vehicle? These points therefore provide an ideal starting point for considering them as part of the discussion of guidelines and later transferring them to the development models.
Finally, the evaluation of the sources identified in the literature research for this thesis yielded three further criteria that can be used to assess risks in the sense of risk ethics in practice and, above all, in the reality of life: the Bayes criterion, the minimax criterion and the Hurwicz criterion, which are briefly explained below.69
- Bayes criterion
The basic model of the Bayes criterion is based on a decision theory which assumes that a subjective probability function is given over the set of circumstances, providing precise information about how likely the acting person considers the occurrence of each circumstance relevant to the decision to be. It is also assumed that a second function, the so-called (subjective) utility function, is given, which assigns a (real) number to each possible consequence, expressing the subjective evaluation of that consequence. These two additional pieces of information about the decision situation then make it possible to calculate the expected utility value for each alternative action. The Bayes criterion then demands that the action with the highest expected utility value be chosen. From the perspective of rationality theory, it should also be noted that this decision rule provides a method for placing all actions in a preference order: the options for action are ranked by their expected utility. According to the Bayes criterion, the most rational action is the one that maximizes the expected value of the subjective benefit. Applied to a company's guidelines, the organization would have to create an environment and provide the means or opportunities to maximize the benefits of individual activities or measures.
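The Bayes criterion can be sketched in a few lines of Python; the actions, circumstances, probabilities and utilities below are hypothetical values chosen only to illustrate the calculation:

```python
# Bayes criterion: choose the action with the highest expected utility.
# All probabilities and utilities are illustrative assumptions.

p = {"no_fault": 0.95, "fault": 0.05}  # subjective probabilities over circumstances

utility = {  # utility[action][circumstance], subjective utility values
    "release now":             {"no_fault": 100, "fault": -1000},
    "run one more test cycle": {"no_fault": 80,  "fault": -100},
}

def expected_utility(action: str) -> float:
    return sum(p[c] * u for c, u in utility[action].items())

# The criterion also yields a full preference order over the actions;
# the maximum of that order is the rational choice.
ranking = sorted(utility, key=expected_utility, reverse=True)
assert ranking[0] == "run one more test cycle"  # approx. 71 vs. approx. 45
```

The slightly lower best-case payoff of the cautious action is outweighed by its far smaller downside once the subjective probabilities are factored in.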
- Minimax criterion
The minimax criterion is probably the most prominent decision criterion for dealing with uncertain situations, not least because it has been taken up by such prominent authors as Hans Jonas and John Rawls and applied more or less directly in their work. The strategy of avoiding the greatest possible evil gives this criterion its name, and a two-step procedure is followed to achieve this goal. First, for each event that can influence the consequences of choosing an alternative, the alternative that promises the greatest benefit is determined. In the second step, the difference between each of the other alternatives and this maximum value is calculated for each possible event. After this determination of the possible relative losses, the alternative can be identified that promises the lowest maximum loss regardless of which of the possible events occurs - the least possible evil, so to speak. Based on this idea, every developer would, for example, have to engage with the possible risks of the system they are working on in order to identify and weigh them up. In accordance with the minimax principle, they would then have to work in such a way that, despite all the risks, only the least possible damage could occur.
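The two-step procedure described above (a minimax-regret reading of the criterion) can be sketched as follows; the alternatives, events and payoffs are illustrative assumptions:

```python
# Minimax sketch: minimise the maximum possible relative loss (regret).
# All payoffs are illustrative assumptions.

payoff = {  # payoff[alternative][event]
    "conservative design": {"normal operation": 60,  "rare failure": 50},
    "aggressive design":   {"normal operation": 100, "rare failure": -200},
}
events = ["normal operation", "rare failure"]

# Step 1: for each event, the best payoff achievable across all alternatives.
best = {e: max(payoff[a][e] for a in payoff) for e in events}

# Step 2: regret = difference to that best payoff; choose the alternative
# whose worst-case regret is smallest - the least possible evil.
def max_regret(alternative: str) -> int:
    return max(best[e] - payoff[alternative][e] for e in events)

choice = min(payoff, key=max_regret)
assert choice == "conservative design"  # worst-case regret 40 vs. 250
```

The aggressive design wins in normal operation, but its catastrophic worst case dominates the comparison, which is exactly the logic the criterion formalizes.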
- Hurwicz criterion
In 1951, the economist and later Nobel Prize winner Leonid Hurwicz proposed a new criterion that combines the advantages of the maximax and maximin criteria. The starting point for his considerations was the question of whether, in cases where no catastrophes are to be expected, it would not be more sensible also to take into account what positive outcomes a decision in favor of a particular alternative could yield. Considering only the possible negative consequences of risks conceals the potential of the positive consequences that the decision alternatives could bring. The Hurwicz criterion therefore requires that the best and worst possible consequences be determined and then weighted. This criterion can also be expressed mathematically in order to derive a rational basis for decision-making that weighs harm and benefit against each other. In the context of autonomous driving, this would mean that although the risk of errors in development and possible fatalities as a result of a system failure would be critical and should be avoided under all circumstances, the prospect of a possible complete reduction in traffic accidents and fatalities would carry a significantly higher benefit. According to the Hurwicz criterion, it would therefore be desirable to press ahead with the development of autonomous driving, as the added value outweighs the potential harm.
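The weighting of best and worst consequence can be written down directly; the payoffs and the optimism parameter alpha below are illustrative assumptions:

```python
# Hurwicz criterion: weight the best and worst possible consequence with an
# optimism parameter alpha (alpha = 1: pure maximax, alpha = 0: pure maximin).
# Payoffs and alpha are illustrative assumptions.

def hurwicz_value(outcomes: list[float], alpha: float) -> float:
    return alpha * max(outcomes) + (1 - alpha) * min(outcomes)

alternatives = {
    "advance autonomous driving": [1000.0, -500.0],  # large benefit vs. serious harm
    "stay with manual driving":   [100.0, 0.0],
}

alpha = 0.7  # a moderately optimistic decision-maker (assumed)
choice = max(alternatives, key=lambda a: hurwicz_value(alternatives[a], alpha))
assert choice == "advance autonomous driving"
```

With a sufficiently pessimistic alpha (e.g. 0.2), the same calculation flips to the cautious alternative; the criterion thus makes the trade-off between potential harm and potential benefit explicit rather than deciding it once and for all.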
Some possibilities can be derived from the positions of risk ethics presented and discussed, which can be transferred to the development models of the automotive industry. For this reason, the consideration of risk ethics is concluded with a summary of the advantages and disadvantages in the following Table 2.
Table 2 : Risk ethics - overview of pros and cons
Illustrations are not included in the reading sample
Source: Own compilation (2024)
According to deontological ethics, ethically correct actions should generally comply with a moral norm. The correctness of an action is therefore assessed on the basis of the motivation behind the action itself and not on the consequences that the action entails. An example of this is the prohibition of lying, regardless of the consequences of lying. But what exactly is morality?
At its core, the Stanford Encyclopedia of Philosophy describes morality as a set of norms and principles that determine our (individual) actions in relation to one another and to which we attribute a special meaning. Morality is based on a fundamental value or represents a value in itself. Such a moral value could be one's own integrity or, as mentioned above, honesty. Moral theories therefore attempt to provide corresponding criteria for the assessment of actions.70
In addition, moral theories can also provide procedures for decision-making, which can be used to determine the right action or to set conditions for morally appropriate practical considerations. However, since moral theories provide an explanatory framework, they help us to recognize connections between criteria and decision-making procedures and offer other forms of systematization.
The term "common sense" is used colloquially to describe a form of judgment formation that we experience on a daily basis without direct or active comparison with rule systems, etc. As humans, we therefore often have to act and decide on the basis of a kind of spontaneous or intuitive assumption. But how can it be that we as humans rely on this "common sense" when making judgments when we are supposedly acting spontaneously and/or intuitively?
This phenomenon leads to the theory of "common-sense morality", which is based on moral judgments, intuitions or principles. This theory is important in explaining how we as humans understand the principles of morality. The characteristic of common-sense morality is determined by human, normal reactions to cases, which in turn suggest normative principles or insights.
For example, a frequently mentioned feature of this theory is the self/other asymmetry in morality, which manifests itself in various ways in our intuitive responses. We often distinguish between morality and prudence, thinking that morality concerns our interactions with others, whereas prudence is concerned with the welfare of the individual from their own point of view. But if we assume that we act morally on the basis of common-sense theory, then this should also mean that we are not dealing with mere prudence, but that we apply the same moral value to ourselves as a standard towards other people. When we act according to "common sense", we therefore decide according to moral standards that we consider valuable for ourselves and that are also valuable for third parties. In a sense, we assume individual moral responsibility for our actions.
This theory can be illustrated particularly well using the example of the trolley dilemma explained above (see chapter 2.2.2 ). There, we would probably intuitively say, and perhaps necessarily come to the conclusion, that we kill one person in order to save five people. In other situations, however, we would very quickly come to contradictions. An example of this is the thought experiment of organ transplantation, where a doctor would have to kill a patient in order to save five other patients with his organs. Here we would intuitively say that actively killing a patient is worse than simply letting them die passively. In the sense of deontological ethics, the intention, the motivation of the action, plays the decisive role.
A central representative of deontological moral theories is the German philosopher Immanuel Kant, who defined the so-called Categorical Imperative ("Act only according to that maxim which you can at the same time will to become a universal law") as the basic principle for the moral duties of human beings.71 In simple terms, one could say that the fundamental idea of Kantian ethics is the idea that autonomous actors are subject to moral duties, the content of which is determined by what moral actions are required or forbidden of other people.72
According to the idea of such basic principles, it is about a form of practical thinking that leads to corresponding actions. The dispositions, agreements and this directivity or normativity is expressed through "I should..." or "I ought to..." in meaning, which is indeed normative, but only rudimentarily moral. This in turn corresponds to the basic idea of Kant's categorical imperative.73
In the literature, there are different positions on this basic idea with regard to the rules for evaluating an action itself. For example, extreme action deontologists hold the view that every person must be able to experience each individual situation anew and then decide what is right and dutiful at that moment without referring to any rules. Similarly, it should not matter which decision leads to what degree of good versus bad consequences for ourselves and humanity (as a kind of counter-thesis to utilitarianism, where any increase in the general good is in the foreground and could therefore also be at the expense of individual people). To put it simply, one could say that action deontologists assume that individual moral judgments are fundamental and that all rules must be derived from them - and not vice versa.74
These approaches can be critically discussed, as a life without rules seems quite unrealistic for social coexistence. In addition, the burden that everyone would have to constantly spend time and energy every day to reassess every situation etc. would certainly present us with a perhaps impossible challenge in everyday life. Consequently, it follows that we tacitly acknowledge that we need appropriate rules for a simpler and better life (in the sense of a social contract).
The action deontologist is opposed, for example, by the rule deontologist, who holds that the standard of behavior consists of one or more rules. These rules can be very concrete (such as the example of always telling the truth) or more abstract, like Sidgwick's principle of justice ("It cannot be right for A to treat B in a way in which it would be wrong for B to treat A, merely because they are two different persons, and without there being any difference between their qualities or the circumstances of their actions that can be adduced as a reasonable ground for treating them differently.").75
Perhaps it is the case that moral judgments or value judgments ultimately require reasons, and reasons cannot only apply to an individual case. For if the reason applies to a particular case, then it must also apply to similar cases. It follows from all this that deontological theories of action are not tenable in principle. If someone decides, judges or justifies something in the moral sphere, they are at least implicitly bringing rules or principles into play. Probably the most common objection here is that no rule can be found that does not equally allow for exceptions or excuses. Likewise, it is almost impossible to establish a system of rules that is free of conflicts between individual rules.76
But how do these deontological concepts of ethics fit in with autonomous driving?
Current literature in the field of autonomous driving discusses and accepts various ethical or moral norms or rules that could guide the underlying logic and thus the decisions of autonomous driving systems. In addition to Kant's aforementioned categorical imperative and its prohibition of using humans as a mere means to an end, the rules mentioned include, for example, Asimov's three laws of robotics (e.g. the first law, which states that a robot must not injure a human), the obligation to avoid harm and collision, adherence to predetermined paths or virtual boundaries, the protection of uninvolved road users, doing nothing, compliance with traffic laws and the obligation to drive with self-awareness. Such basic principles or maxims could be transferred to the functioning of autonomous vehicles in the form of priority rules in order to control system behavior with conditions or restrictions in a hierarchical order.77
However, if the decisive criterion of deontological ethics is a moral norm, this can only be valid if it is the general (or at least majority) consensus of a society. These criteria could then be, for example, the underlying intention for a certain action or its compatibility with a certain formal principle.
However, as already explained using the example of the trolley dilemma (see chapter 2.2.2 ), this depends on various factors such as the cultures and values of a society, which makes it difficult - if not impossible - to find a universally valid, global consensus. It can be assumed that almost everyone agrees that road deaths should be avoided, but on the basis of what moral norm and law can this be done? Starting from this question, contractarian deontologists look for principles that every individual would agree to in the form of a general social contract (as proposed by Rawls, for example) or that no individual could reasonably reject (as proposed by Scanlon, for example).78
As far as the topic of autonomous driving is concerned, however, approaches such as Kant's categorical imperative are too broad and unspecific to be adopted directly and applied to a system. This is why scientists are moving towards developing rule-based ethical theories in the form of a cluster (e.g. "prohibited, permissible, obligatory actions") or a hierarchy of restrictions and rules (e.g. Asimov's Three Laws of Robotics) that are tailored to the programming of systems for autonomous driving. This should then enable the systems to behave desirably in dilemma situations.
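Such a hierarchy of restrictions can be sketched as an ordered rule list that is evaluated top-down, with the first applicable rule determining the action. The rule names, predicates and actions below are hypothetical placeholders, not rules proposed in the literature cited here:

```python
# Sketch of a hierarchical deontic rule set for a driving system.
# Rules are checked in priority order; the first applicable rule wins.
RULES = [
    ("avoid_harm_to_humans", lambda s: s["humans_at_risk"],        "emergency_brake"),
    ("avoid_collision",      lambda s: s["obstacle_ahead"],        "swerve"),
    ("obey_traffic_law",     lambda s: s["speed_limit_exceeded"],  "slow_down"),
]

def decide(situation, default_action="continue"):
    for name, applies, action in RULES:
        if applies(situation):
            return name, action
    return "default", default_action

# An obstacle ahead triggers the collision rule before the lower-priority
# traffic-law rule, even though both conditions could be addressed.
situation = {"humans_at_risk": False, "obstacle_ahead": True,
             "speed_limit_exceeded": True}
print(decide(situation))
```

The strict top-down ordering makes the system's behavior explainable, but it also illustrates the criticism raised below: every borderline case must be anticipated by a rule, and conflicting rules are resolved only by their fixed priority.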
This means that the systems for autonomous driving will be given far-reaching decision-making powers in the future and we expect the system to make the right decisions even in dilemma situations. With regard to such a dilemmatic situation, it is worth considering that we would not (at least until now) expect a human driver to resolve this situation in a morally convincing manner. Rather, we would (so far) concede that he could not act morally in such a case. Unless he (the driver) had breached any duty of care, he would be held at least morally, perhaps even legally, blameless in such a scenario.79
In general, rule-based ethical theories could form the crucial basis for autonomous driving systems, as they provide a computable structure for judgment, decision and reaction that would be technically feasible. However, it could be argued that such rule-based approaches ignore context-specific information, and implementing a fully deontological approach within the systems would be very complicated from a technical and functional perspective. But what exactly is the problem here? This can be illustrated by the following scenario: the driving behavior, routing, etc. of a vehicle must be determined by the system, for which corresponding decision parameters must be set within the system. If the system controls the behavior of the vehicle and makes decisions, then it must do so in a morally convincing way, or at least in such a way that we humans, as moral actors, can understand and accept the decisions of the system.
However, if we do not reach such a consensus on comprehensibility and acceptance for the resolution of an ethically or morally critical situation, then a rule-based decision might not be a solution after all. Perhaps such a system would reveal dangerous behavior in order to strictly adhere to the given rules. As a result, it would only be possible to map reality to a limited extent.
This could also lead to lower social acceptance of rule-based approaches, as moral decisions and obligations are not absolute (unambiguous) but depend on the context. Although rule-based approaches can be implemented very well in software (technical feasibility), the sheer number of rules required, which can conflict with one another, represents enormous complexity. The universality of such an approach would also be challenging to the point of impossible, as every so-called borderline case80 would have to be covered in advance by a defined rule. Explainability and comprehensibility, at least, can be achieved by representing rules with different priorities. From a technical and functional point of view, therefore, the implementation of a deontic approach in the systems for autonomous driving is very complicated.81
In fact, it is questionable whether the principle of avoiding actively caused damage can be applied to automated processes at all, as the vehicle lacks the intention to take active action. The redirection and thus the shift from passive to active would already be pre-programmed and based on an intention that goes back to the developer.
The developer defines - albeit indirectly - active behavior through the programming of the system and does not only intervene in the event of an accident. Perhaps there is only one consequentialist decision-making option for the autonomous decision of such systems, as no distinction is made between "doing" and "not doing". With regard to the programming of unavoidable accident situations, it remains questionable whether the killing of a person would still be a justifiable side effect of swerving if this were already determined in advance by the developer in the system.82
However, a similar problem also arises if the programming of autonomous vehicles brings the decision forward and thus (pre-)decides it, thereby laying the foundations for who should be killed in case of doubt. At this point, the human driver is likely to make his decision based on the numerical trade-off (one person vs. five people), as illustrated by the example of the trolley dilemma (see chapter 2.2.2), because two negative duties collide.
The question is how the position of the developer is to be assessed in this situation: the developer acts as an "outsider" with respect to the accident situation itself, but nevertheless acts actively in the form of the decision specifications within the system. Accordingly, the developer is also obliged to decide. In this sense, he would not only redirect the car when it enters the accident situation but would have already decided beforehand. The developer would therefore actively determine the vehicle's course and would also kill in both cases. If one were to follow this line of reasoning, there would also be a collision of two negative duties here, in which the numerical balancing would have to be considered permissible. This in turn would lead to the problem that the application of consequentialist ethics (see chapter 3.4), which is not compatible with our legal system, would be preferable for the system, as the intention is shifted forward from the time of the accident to the programming.83
Rights-based moral theories consider entitlement rights (for example, as a human being I automatically have human rights) as a normative basis. Accordingly, moral duties are primarily derived from a person's moral rights. Kant's dignity-based moral theory (e.g. human dignity), on the other hand, takes the normative concept of dignity as its basic category. Moral duties are therefore derived directly from the dignity of people. All these aspects of deontological ethics always involve the question of (individual) responsibility (towards other people or society).84
If we now ask the question of how an individual developer or even an organization or company could or should deal with these deontological concepts, various approaches arise (see Table 3).
Table 3 : Deontological ethics - overview of pros and cons
Illustrations are not included in the reading sample
Source: Own compilation (2024)
In consequentialist theory, the moral evaluation of rightness or wrongness depends exclusively on how good or bad the consequence of the action is (compared to the other possible action alternatives).85 This means that the result and not the motivation of the action is decisive for the ethical or moral evaluation, because evil can also arise from a good intention (and vice versa).
In the context of autonomous driving, this approach has so far mainly been applied in the form of utilitarianism, which aims to maximize the overall good. This is often justified by the minimization of traffic accidents or fatalities (as also mentioned in chapter 2.2.2). In addition to this minimization of casualties, there are also other forms of benefit that can be used as evaluation criteria (in terms of costs and benefits).86 This is primarily based on the assumption of a certain rationality on the part of the actors when making decisions. But what does rational action mean?
Acting rationally means optimizing the causal consequences of one's own actions. This conceptual approach is quite widespread in our (Western) world and is not least the central, consequentialist criterion of the rational choice paradigm or the success factor in the application of decision and game theory methods in economics and social sciences. The consequentialist concept of rationality is particularly specific in its application to decisions under uncertainty, i.e. decisions whose consequences cannot be determined with certainty.87
This shows that there is obviously a general acceptance or approval of the aspects of rationality and consistency in society. How could this be used for the development of systems for autonomous driving?
For example, one could program a strictly consequentialist calculating system using AI, which takes into account certain human characteristics (life expectancy, life experience, merit to society, number of injured parties, etc.) as a target variable for the decision-making process. In principle, the system could prioritize the self-protection of vehicle occupants over the protection of other road users. If we expect such a system to decipher a problem that is morally unsolvable for us humans, the solution in this case will be based on a simple rule, such as "protecting others before protecting oneself" (or vice versa). But this hardly offers any room for moral considerations, from which the responsibility of the occupant or driver then also results, because the system ultimately acts according to defined criteria and rules.88
But what factors could then be taken into account by a developer as a basis for the decision?
In addition to the aspects mentioned above, other aspects such as the vehicle's energy consumption, passenger comfort, route planning to reach the navigation destination, physical damage and compliance with traffic regulations can also be included in the catalog of decision-making criteria. Based on such aspects, the developers of systems for autonomous driving would then have to develop algorithms that calculate the expected costs of the possible decision options and thus the consequences, so that ultimately the decision with the best trade-off result and thus the best cost-benefit ratio is selected.89
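A cost-function algorithm of the kind described above can be sketched as a weighted sum over decision criteria. The weights, criteria and option values below are entirely hypothetical and serve only to illustrate the trade-off mechanism:

```python
# Consequentialist cost-function sketch: each decision option is scored
# as a weighted sum of its expected harms; the cheapest option wins.
# Weights are hypothetical and would in practice be ethically contested.
WEIGHTS = {
    "personal_harm":   1000.0,  # expected number of injured persons
    "property_damage":   10.0,  # expected units of material damage
    "rule_violation":    50.0,  # traffic-rule violations incurred
    "comfort_loss":       1.0,  # passenger discomfort
}

def expected_cost(option):
    return sum(WEIGHTS[k] * option.get(k, 0.0) for k in WEIGHTS)

options = {
    "brake_hard":  {"personal_harm": 0.0,  "comfort_loss": 5.0},
    "swerve_left": {"personal_harm": 0.01, "property_damage": 1.0,
                    "rule_violation": 1.0},
}
best = min(options, key=lambda o: expected_cost(options[o]))
print(best, {o: expected_cost(v) for o, v in options.items()})
```

The sketch makes the central difficulty visible: the "right" decision depends entirely on the weights, i.e. on how harm, rule violations and comfort are valued against each other, which is precisely the responsibility the text assigns to the developers.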
The aspect of benefit is particularly relevant here, as both the aspect of the least harm and the greatest gain of a rational decision could be useful.
The so-called utility theorem brings together the deliberation process of weighing up possible positive and negative consequences in a common evaluation measure that is neutral. For example, it does not stipulate that only the actor's own interests are decisive for the evaluation of consequences. According to this, however, altruists or utilitarians should be concerned with optimizing the consequences of their actions; in the case of the altruist with regard to the person whose welfare the actor has in mind and from the point of view of the utilitarian with regard to the sum of individual welfare. The utility-theoretical evaluation measure is therefore open to ethical principles and moral reasons for action, but these must be reflected in the evaluation of consequences.90
Utilitarianism is a prominent form of consequentialism that calls for the maximization of human welfare. The theory determines the ethical correctness of an action or norm solely on the basis of its (foreseeable) consequences by maximizing the expected overall benefit. Such a theory may allow and advocate the sacrifice of a single person in order to save a larger number of people in aggregate, which could then be a possible decision in terms of the trolley dilemma - but may we weigh lives against each other?
The Federal Constitutional Court in Germany denied this in its 2006 ruling on a constitutional complaint regarding the Aviation Security Act, which sought to create scope for the shooting down of hijacked aircraft (following the events of September 11 in the USA). In the grounds for the ruling, it was argued that weighing up life against life would not be compatible with the Basic Law of the Federal Republic of Germany.91 Although this may only be a legal clarification in Germany, it is representative of the fundamental dilemma as presented in the trolley dilemma.
Nevertheless, if one were to attempt to transfer such a calculation as an ethical theory to autonomous vehicles, what might this look like? Would it be a kind of cost-function algorithm which calculates the expected costs (i.e. the personal harm) for different possible options, so that the system then selects the option that causes the least cost in the event of harm? Could utilitarian approaches to autonomous driving systems therefore provide a better representation or consideration of reality, as many situational factors would be taken into account in the calculation? A kind of systemic cost function with the aim of maximizing benefits could then potentially also be applied in numerous traffic situations, albeit always depending on the precise definition of the benefit to be maximized. One could therefore say that the goal of general validity in the sense of universality would be achieved by such an approach.92
Comparable to deontological ethics (see chapter 3.3), a possible integration of utilitarianism into the systems for autonomous driving would be an "elegant solution" for the developers, so to speak. Why is that? With machines, we de facto always strive to maximize functions for the sake of optimization in order to increase efficiency (assuming technical feasibility). Ultimately, this corresponds to the basic logic of utilitarianism. However, from a technical point of view, calculating the benefits and burdens of all those involved in an accident represents a major challenge, the responsibility for whose implementation lies with the developers themselves. In direct comparison to a purely deontological system for autonomous driving, which would act according to defined specifications, a utilitarian system that pursues continuous optimization would be less transparent and probably less predictable. This makes it necessary to fully understand the underlying logic of the system, i.e. why a certain decision was made by the system. Ultimately, however, a central question remains: is it permissible to actively restrict the benefit of an individual in order to achieve a greater benefit for other individuals? This also raises the question of how the developers of such systems should deal with this situation. From an ethical as well as a legal perspective, therefore, the implementation of a utilitarian approach in the autonomous driving system requires that many targeted barriers be implemented in order to avoid undesirable behavior. As a compromise approach, scientists advocate a combination of deontological ethics (e.g. a kind of imperative to avoid collisions and personal harm) with utilitarian ethics, in the form of a relative weighing of costs and options.
Another aspect that technicians, lawyers and sociologists are discussing is the question of the extent to which autonomous actors could be held liable for incorrect behavior (and thus for the consequences of the system's behavior). In this context, it is not only a question of moral responsibility, but also of legal responsibility. But first the question must be clarified: are autonomous vehicles also autonomous actors? As things stand today, from both a legal and a social perspective, the majority of people will certainly agree on a "no". The decisions made by autonomous vehicles and their systems are ultimately programmed by humans and (ideally) take into account technical, legal, economic and ethical aspects.93
The standards for the rules and priorities for weighing up such a system should be drawn up and defined by developers of automated vehicles as well as, for example, lawyers and ethicists. It is imperative that these standards are generally transparent, for example to prevent OEMs from building excessive self-protection (e.g. protecting the occupant above all others) into the algorithms. The rules should consist of generally accepted concepts that correspond to a tacit social contract, such as that injury is always preferable to human death or that property damage is always preferable to human injury.
It is unlikely that a system developed by humans will ever cover a fully comprehensive set of rules for situation assessment and decision-making for all scenarios. This raises the question of what to do in the event of uncertainty regarding the correct decision. One approach could be as follows: In all scenarios which are not covered by rules or where rules conflict with each other or with possible consequences or where the ethical action is uncertain, the vehicle should (if possible) brake and swerve.94
In case of doubt, could a decision based on existing laws or the legal consequences be considered?
The US philosopher Peter Railton, for example, argues for his own theory of objective consequentialism, known as "sophisticated consequentialism". Here, the correctness of an action is a function of its actual consequences. In Railton's view, one can therefore be a good consequentialist without being alienated from one's loved ones. Other authors, although they do not attempt to defend a moral theory per se, have also provided explanations of how agents can act on the basis of reasons - and thus perform morally valuable actions - even if these reasons are not explicitly articulated in their practical reasoning. Deontologists, for their part, argue that autonomous action does not necessarily involve the explicit invocation of, for example, the categorical imperative. In general, these approaches share the notion that the justifying reasons are present in some form in the agent, but need not be explicitly articulated or invoked by the agent in order to act morally rightly.95
With regard to the developer, the question therefore arises as to how they can enable the system to evaluate and decide accordingly. One solution to this could be to orientate oneself towards generally valid rules or morally established values of society. However, this leads to the question of what standard of validity and therefore binding force is given to such rules? Can the rules change or can it be sufficient to be guided by the currently valid rules, derived from the current society's understanding of values and morals?
In principle, it can be said that concretized rules - both moral and legal - are normative. This normativity follows (presumably and refutably) from the (moral) principle that the common good, whose fundamental content is given by the basic principles of practical reason, requires institutions to take measures. These measures serve to establish, apply and enforce certain rules on the relevant matters. These rules can originate from the organization or the company itself (e.g. governance and compliance guidelines), but also from the law.
Social factors make a positive rule of law a reason for action because of the desirability of authority as a means of securing the common good and the desirability of a state of affairs in which the applicable law and thus laws prevail and not individuals. Purely positive law, which is legally valid, is (presumably and contestably) valid and morally binding, has the moral form or meaning of a legal obligation if and because it takes its place in a scheme of practical reason.
Both the effectiveness of laws as a solution to coordination problems and promotion of the common good and the fairness of their observance depend on their being seen by both individuals and the administrators of the legal system as legally and morally justified, and thus as validly enacted law overriding all other reasons except competing moral obligations of greater importance.96
It can be argued here that laws and thus the legal framework can also be or are part of risk ethics, as risks must also be considered in conjunction with liability aspects, at least from the company's perspective. This results from the standards already mentioned, which are established in risk management.
Ultimately, the decision as to which aspects of consequentialist ethics are reflected in the systems for autonomous driving lies with the developers and their choice of appropriate evaluation and decision-making procedures based on rules.
In conclusion, it can be said that a consequentialist perspective is a useful addition to the development of systems for autonomous driving. The following summary in Table 4 serves as a basis for deriving corresponding guidelines from consequentialist ethics at a later date.
Table 4 : Consequentialist ethics - overview of pros and cons
Illustrations are not included in the reading sample
Source: Own compilation (2024)
The ethical perspectives presented above must now be transferred in greater depth to the discussion on the development of systems for autonomous driving. This can be done more easily if they are discussed on the basis of a specific case study. This makes it possible to understand the challenges faced by developers in the field of autonomous driving in their daily work. And, above all, it can illustrate the practical benefits they can derive from taking ethics into account.
The discussion now turns to a specific example. The Adaptive Cruise Control (ACC) function was selected for this purpose. This can be described as a type of intelligent cruise control, which brakes and accelerates as required based on a target speed. This is a common system already available on the market (SAE level 1), the basic structure of which is easy to explain.
In order to narrow down the case study and thus be able to discuss it better, brief basic explanations from practice are defined below using the following three sections as premises:
- Architecture and functionality of the ACC
- Organizational framework conditions
- The challenge of system development
Architecture and functionality of the ACC
For a better understanding and comprehensibility of the discussion, the architecture and functionality of ACC will be briefly explained. ACC uses sensors (such as cameras, infrared or laser measurements, radar, etc.) to detect the vehicle's surroundings and automatically adjust its speed to maintain a safe distance from the vehicle in front. If necessary, the vehicle can then brake or accelerate accordingly up to the set target speed. The driver is always responsible for actively driving the vehicle (e.g. maintaining the lane) but is supported by this assistance system and thus relieved of some of the burden (for more information, see Table 7). A more detailed explanation of the technical architecture is not provided.97
A modern ACC as a system can perform the following (partial) functions:
- Speed adjustment: ACC automatically adjusts the speed of the vehicle when the vehicle in front slows down.
- Distance adaptation: ACC can maintain a (definable) safety distance in conjunction with speed adaptation.
- Automatic start and stop: In addition to the typical application on the highway, many modern ACC systems are also able to stop the vehicle completely and then accelerate again as required, which means that they can also be used in traffic jams or city traffic.
- Traffic sign recognition: Traffic signs such as speed limits are recognized by the camera and the ACC system adjusts the set maximum speed according to the current traffic regulations.
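The interaction of speed and distance adaptation described above can be sketched as a simple decision step of a control loop. The following Python fragment is a deliberately minimal illustration, not production code; all names and values (`time_gap`, the acceleration bounds, the 0.9 factor) are invented assumptions for this sketch.

```python
def acc_step(own_speed, set_speed, lead_speed, gap, time_gap=1.8):
    """One decision step of a highly simplified ACC controller.

    own_speed, lead_speed: speeds in m/s (lead_speed is None if no
    vehicle ahead is detected); gap: distance to the lead vehicle in m;
    time_gap: desired safety distance expressed in seconds.
    Returns a bounded acceleration command in m/s^2.
    """
    if lead_speed is None:
        # Free driving: regulate towards the driver's set speed.
        target = set_speed
    else:
        safe_gap = own_speed * time_gap  # distance needed at current speed
        if gap < safe_gap:
            # Too close: drop below the lead vehicle's speed to open the gap.
            target = min(set_speed, lead_speed * 0.9)
        else:
            # Follow the lead vehicle, but never exceed the set speed.
            target = min(set_speed, lead_speed)
    # Simple proportional control, clamped to comfortable limits.
    return max(-3.0, min(2.0, 0.5 * (target - own_speed)))
```

Even this toy version makes the developer's design questions tangible: what counts as a "safe" distance, and how hard may the system brake on the driver's behalf?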
If we now look further towards the higher SAE levels for autonomous driving, we quickly realize that the complexity of this system's interaction with others in development increases disproportionately. One example: ACC in combination with Lane Centering Assistance (LCA). LCA, also known as Lane Keeping Assist, is a driver assistance system that uses a camera to detect the lane ahead and intervenes independently with the aim of keeping the vehicle centered in the lane (SAE level 1 to 2, depending on the system). Here, two already complex systems are combined in such a way that they enable an even higher degree of automation of the driving function and thus added value for the driver.
The developer must always take into account that individual sensors may fail temporarily (e.g. dirt on the camera) or completely (e.g. technical defect). It must also be taken into account that the developer does not usually develop all (partial) functions of a system such as ACC.
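The requirement that individual sensors may fail can likewise be sketched in a few lines. The following Python fragment is an illustrative assumption about how a degradation check might look, not a real ACC implementation; conservative "trust the closest reading" fusion is one possible design choice among several.

```python
def fused_distance(camera_m, radar_m):
    """Return a usable distance estimate, or None if the function must
    degrade. camera_m / radar_m are None when the sensor reports no
    valid reading (e.g. dirt on the lens, technical defect)."""
    readings = [r for r in (camera_m, radar_m) if r is not None]
    if not readings:
        return None  # no valid source left
    # Conservative fusion: trust the closest reported obstacle.
    return min(readings)

def acc_available(camera_m, radar_m):
    """ACC stays active only while at least one sensor delivers data;
    otherwise the driver must be warned and the function deactivated."""
    return fused_distance(camera_m, radar_m) is not None
```

The ethically relevant decision is hidden in the `None` branch: when exactly, and how, is control handed back to the driver?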
Organizational framework conditions
At all OEMs, development is traditionally divided into so-called domains, such as "Drive and Chassis" or "Infotainment". Within these domains, employees develop and are responsible for the functions of the systems located in that part of the vehicle architecture. Cross-domain collaboration is therefore essential in order to understand how the (sub-)functions of the systems interact in terms of a chain of effects.
One example: If you want to provide the customer or driver of the vehicle with a system such as ACC, different (sub-)functions have to work together. A developer who is responsible for the speed adaptation function must then coordinate with the developers for the traffic sign recognition function, for example, who may come from the "infotainment" domain, and so on. A change or modification to a single function can lead to a chain reaction of interactions, for example.
As a rule, (sub)scopes of development are outsourced to suppliers such as Bosch or Continental and then delivered accordingly and integrated into the (overall) system of the vehicle. For economic reasons, among others, it is not common practice for an OEM to be fully responsible for all development services itself.
Since direct and continuous communication between all developers within an OEM as well as with and between suppliers is a challenge in itself, the development of such systems is generally always based on the allocation of (partial) scopes.
A common instrument for establishing certain formal standards among all development partners is the so-called specification. Such a specification includes all of the OEM's requirements as well as the resulting deliveries and services under the responsibility of the supplier. Specifications are drawn up for each development project and cover project-specific aspects, which is why there are also so-called cross-sectional specifications. Their purpose is to ensure that, in addition to the project-specific scopes, supplementary basic requirements that generally apply to all projects are also taken into account and implemented by the supplier.
One example of this is Volkswagen AG's publicly accessible cross-sectional specification called "Group Basic Software Requirements: Basic requirements that the Volkswagen Group places on vehicle-installed and vehicle-related software/software-defined systems and their development processes". The regulations to be fulfilled for system and software development, for example, are explained there.98
This content must be taken into account by the suppliers' developers, but also written and maintained by an OEM's developers. These are just a few examples of the organizational framework conditions that a developer must know and implement, which illustrates the complexity.
The challenge of system development
As a developer, you are particularly faced with the challenge that there will be a mixed form of transportation in the foreseeable future. This means that there will be both human drivers and (partially) autonomous systems on the roads, including various types of vehicles with different degrees of automation. This has two consequences: Firstly, there will inevitably be incompatibilities in the technology and functioning of the vehicles and therefore also in the way the vehicles interact with each other. Secondly, the vehicles will have different levels of installed technology, which means that the accident risks and accident avoidance options will vary. Newer generations of systems should generally be more efficient and therefore safer than their predecessors.99
As a result, fundamental incompatibilities arise that have to do with the different ways in which autonomous systems and human drivers function as "agents" (i.e. entities that act according to certain fundamental goals and principles). Autonomous systems or vehicles should achieve their goals in such a way that they take into account an optimization goal (e.g. through sustainability by saving energy or fuel or saving travel time by reducing congestion) in addition to the actual transfer from the point of departure to the destination. Human drivers also act on the basis of principles and rules (e.g. traffic rules), but do not necessarily have to pursue optimization goals (e.g. driving fast may be fun for the driver, but also increases the risk of accidents for the driver and other road users). Simply put, a human agent cannot always be assumed to have rational driving behavior.100
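The contrast between an optimizing autonomous agent and a human driver can be made concrete with a toy objective function: the agent ranks options by a weighted sum of travel time and energy use, whereas a human is under no obligation to minimize anything. The following Python sketch is purely illustrative; the routes, weights and figures are invented assumptions.

```python
def route_cost(travel_time_s, energy_kwh, w_time=1.0, w_energy=10.0):
    # Scalar objective an autonomous agent could minimize; the weights
    # encode the trade-off between travel time and energy consumption.
    return w_time * travel_time_s + w_energy * energy_kwh

# Two hypothetical route options: (travel time in s, energy in kWh).
routes = {"highway": (1800, 4.0), "city": (2400, 2.5)}
best = min(routes, key=lambda r: route_cost(*routes[r]))
```

A human driver may well pick the "city" route simply because it is more pleasant; precisely this gap between optimized and merely rule-following behavior is what the developer has to anticipate.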
Discussion - how ethics can help developers work better
We now bring the preceding simplified explanations into a discussion in the context of the development of the ACC from the perspective of a developer.
From a deontological point of view, can we fundamentally assume that the developer behaves with moral integrity as a human being? Yes, we can, and even in case of doubt we would have to assume this. In principle, no strong argument can be found for presuming from the outset that the developer is not behaving morally. This can be compared to the legal presumption of innocence (i.e. "innocent until proven guilty") - and it would also not be practicable to place every developer under general suspicion.101
As a human being, a developer cannot act free of values and morals, which have also become part of the development culture in the course of the agilization of software development, for example. Examples of this are the principles of the agile manifesto or the Vienna manifesto.102 Perhaps one could say that ethics for software developers must establish an ethos of technology development if one wants to ensure that this (the technology) is designed in a normatively desirable way.103 This is emblematic of a developer mentality that is characterized by personal responsibility as a requirement for one's own actions.
So if we assume that a developer behaves morally in the sense of deontological ethics, we can also assume that they follow their own standards of personal responsibility. This would mean that they would have to reflect on themselves in their work and ask questions such as "Have I understood the goal of my work correctly?" or "How can I regularly reflect on whether I am achieving the right results with my work?". A developer who asks themselves these questions would inevitably have to enter into a dialog with the other developers. They could discuss their understanding of the objective, the challenges and problems, as well as possible solution approaches. This enables transparency and a shared understanding of what should be developed and how. It also creates an easy way for individual developers to compare their own and others' perceptions of problems and solutions. Of course, one could argue that this should be part of a qualitative development process for safety-critical functions and systems anyway, but such a process could very quickly reduce problems to a purely statistical, economic evaluation (e.g. a cost-benefit ratio in terms of risk ethics). We do not want to exclude this but rather supplement it with the deontological idea.
If this is transferred to the scenario in which the developer must think about rules and behavioral regulations for safety aspects of their function (e.g. if a sensor is defective), this would remind them to take the greatest possible care. Why? If I use ACC as a driver, then I must trust that the developer behind the system has acted in the interests of the driver, as if the developer themselves were the driver. If this is combined with the preceding idea that the developer discusses their view as a solution to this scenario with those of the other developers, direct added value has already been created.
The consideration of deontological ethics helps each individual developer to gain a basic orientation in the perspective and approach for responsible development, which is driven by responsible action. The basic idea here is the key question: As a driver, what kind of awareness and responsibility would I want from the developers of my driver assistance system? This question goes beyond a formal framework that an organization can define and prescribe. A company can and should set its own values, but if a developer is intrinsically self-motivated and, in the best case, identifies with the company's values, this speaks for more satisfied employees as well as better work results. And this has a positive effect on the quality of development.
The developers could also formulate such a procedure in line with the process model for developing such a system and formalize it in a cross-sectional specification. However, it is important that this procedure is not packaged in principles that are too rigid. Otherwise, the developers (e.g. of a supplier) might perceive the specifications as overly formalized and therefore rigid and inflexible, which can be problematic given the dynamic and complex nature of technology development. There will always be situations that require appropriate situational decisions and cannot be (fully) covered by such approaches. It is important that the context is taken into account in every discussion so that no inappropriate or inefficient solutions arise. Otherwise, no added value would be created - neither for the driver as the user of a system such as ACC, nor for the developer who built the system.
It therefore makes sense to combine this approach with aspects of consequentialist ethics. As a developer, I can initially ask myself two questions when planning my system design: "What benefits should the system have for the driver?" and "What consequences could arise from possible errors in the system or as a result of its use?"
Irrespective of the deontological premise that one bears personal responsibility for one's actions, the consequentialist view means that, as a developer, one develops an awareness of the results of one's work.
It is important to note that the consequentialist view is not intended to override the deontological view described above, but rather to build a bridge between the motivation for the action and the consequences of the decisions. There should be no competition between deontological and consequentialist ethics, but rather a basis for dialog to evaluate different perspectives and approaches for the developers.
Of course, the evaluation of consequences can be a very complex issue. For example, if a system such as the ACC has a fault and harms a person, what impact would this have on the social acceptance of such systems? Could this even lead to political and legal intervention through legislation?
It therefore makes sense to enter a dialog about the interactions between decisions and consequences at an early stage. This creates the basis for each developer to evaluate the impact of their decisions and actions. This enables the developer and the organization to prioritize measures in order to achieve the best possible results.
This makes it possible to establish a continuous review and adjustment of development processes based on the actual results of decisions, from individual developers through to the entire company. Such a review goes beyond pure product optimization and instead represents a constantly learning and self-optimizing organization. The focus here is not on finding culprits, but on constantly learning from mistakes - making this possible must be a primary goal of a company. This promotes a creative and open culture for the development of innovative technologies such as those for autonomous driving.
Consequentialist ethics also make it possible to consider the possibilities of the behavior of drivers (who do not always act rationally) and other road users (e.g. pedestrians) in various scenarios and thought experiments during the development process and design of systems for autonomous driving. Fallback mechanisms and safety measures can then be derived from this.
In conjunction with the agile methods and models of software development, it will be challenging for every developer to think through the consequences of short iterations and development cycles to the end and evaluate them accordingly, but this should rather be seen as an opportunity. The earlier you think about the consequences, the easier it should be to make preventative decisions and derive appropriate measures.
The next step is to include the aspects of risk ethics in the triad of selected perspectives. For the developer, acceptance of risk ethics and the question of why they should think about it will be almost intuitive. It has already been mentioned that risk ethics overlaps very strongly with the established models of risk management. However, the formal and regulatory approaches to risk management are ultimately based on making risks statistically and monetarily assessable. Just as people make mistakes and not every mistake can be avoided, the same fundamentally applies to risks.
A practical problem for a developer dealing with risk ethics is its focus on negative aspects and possible consequences. Every effort should be made to prevent consequences in which people die due to errors in the system or a misunderstanding of how the system actually works (see the Tesla example from chapter 1.1). But the developer should always also keep the positive benefits, the added value of the system, in view. Many problems will arise on the way to SAE level 5 and thus general autonomous mobility, but reducing them to risks would in case of doubt lead to excessive caution, which could prevent innovation.
In terms of deontological ethics, the aim of a developer is not to harm other people with systems such as the ACC, but rather to create added value in terms of comfort and safety - and, with the vision of autonomous driving, to have as few traffic fatalities as possible in the future.
Risk ethics should therefore be understood as a kind of link between deontological and consequentialist ethics, which forms an interface with the norms and standards mentioned above as well as laws. In this way, the company can ensure that the three ethical perspectives are applied as tools and support for developers in a practical and beneficial way.
This discussion is very brief and highly simplified but is intended to illustrate in a striking way that, for example, decision questions from these three ethical perspectives create a practical benefit for the developer and thus added value for the development of technological innovations such as autonomous driving.
The use of ethics is therefore not a contradiction or conflict for technological development, but rather an elementary component of responsible innovation that benefits society.
The ethical perspectives selected for this work have been explained and discussed in the previous (sub)chapters. In addition, their relevance to, and thus importance for, the development of autonomous driving was demonstrated. The resulting challenges for the developers of such systems, as well as possible perspectives and approaches to solutions, were also illustrated.
The next step in this work is to derive ethical guidelines that can be transferred and integrated into the selected development models (see chapter 2.3). This is intended to create a basis for the future ethical technology development of autonomous driving, which is directly embedded in the selected development models of the automotive industry. In accordance with the objectives of this thesis, a concept of ethical guidelines should be created that meets the needs of the developers (bottom-up or micro level) as well as those of the organization or company as a whole (top-down or meso level) and can be applied as practically as possible in the selected development models of the automotive industry.
However, it is important to mention that there are also other approaches (as already described in part) to integrating ethical aspects into software development, for example, and thus also (indirectly) into the development of systems for autonomous driving. One of the most widespread approaches currently in practice is the Code of Conduct (CoC). In recent years, more than 100 of these so-called CoCs for the development of software have been developed by professional associations, companies, NGOs and scientists, for example. These CoCs essentially state accepted values such as participation, transparency and fairness, as well as the standard that the final decision-making power should always lie with people.104
One of the biggest challenges, however, is that there is no practical transfer of CoCs to the methods or models that are practiced and established as industry standards. It seems almost absurd to assume that a CoC can provide a tool for the ethical implementation of values in software that is suitable for all contexts. Such a generalization is hardly tenable, which is why specific solutions are required. Only then can an interpretation, or a specific transfer into the lived reality of technical development by means of a form of instructions within the aforementioned development models, become practicable for the developers concerned.104
In medicine, guidelines are systematically developed recommendations for decision-making on the appropriate treatment of diseases for both the doctor and the patient. The basic idea is to provide both parties with reliable decision-making aids that can be applied in practice.105
This is precisely where the target concept of this work comes in, to close this gap in the transfer with a focus on the automotive industry. In doing so, the three ethical perspectives explained are considered in aggregated form and transferred into practical guidelines - both for the individual developer and for the company as a whole.
A summarized overview of the target concept is presented below (see chapter 4.1), which is then explained in more detail (see chapter 4.2) and concludes with an explanation of the transfer of the target concept to the selected development models (see chapter 4.3).
The following Table 5 provides a summarized overview of possible guidelines for the target concept. The target concept should be understood as a kind of framework and anchored within the organization as well as the existing development models.
Table 5: Summary overview of the target concept
Illustrations are not included in the reading sample
Source: Own compilation (2024)
The approach here is that these guidelines can be taken into account and practiced by individual developers (bottom-up approach) as well as by the entire company (top-down approach). The company itself should also create a culture and an organizational framework that gives every developer the opportunity to understand the three ethical perspectives as a tool in everyday life and thus use them to their advantage.
In order to establish and implement these guidelines, appropriate measures are required. These measures would have to be defined individually depending on the current status of the organization, but some examples for a better understanding are described in the following chapter 4.2.
The idea of the guidelines is to raise awareness of the importance and practical benefits of ethical perspectives for both individual developers and the company as a whole. Both the developer as an individual and the organization of a company as a whole can take a number of measures to implement the ethical approaches listed in Table 5 and support their consideration in the development process.
Below are some examples of measures for the implementation and sustainable establishment of the three ethical perspectives:
Deontological ethics: Possible measures for implementation
- Code of ethics and company guidelines: Companies should formulate a clear, binding code of ethics that sets out ethical principles for the work of developers. This code should reflect universal values such as data protection, human dignity and transparency, and embody the corporate values.
- Regular ethical training: Training programs that educate developers to recognize ethical issues and incorporate them into their work can help integrate deontological principles into everyday life. This strengthens the awareness of one's own ethical responsibility.
- Ethical design: Developers should ensure that their products are designed ethically from the outset, for example by incorporating the protection of human rights and the avoidance of discrimination into the software design. This also includes compliance with data protection laws (e.g. GDPR).
- Internal control mechanisms: Companies should set up processes that monitor ethical compliance, e.g. through ethical audits or an ethics committee that is consulted when decisions are made.
Consequentialist ethics: Possible measures for implementation
- Cost-benefit analysis for development decisions: Developers and companies should introduce systematic cost-benefit analyses to ensure that every decision brings the greatest possible benefit for users, society and the environment. Tools such as SWOT analyses and impact assessments help to weigh up the long-term consequences of decisions.
- User-centered development: Developers should obtain feedback from users at an early stage and integrate it into the development process. This could be done through usability tests, user surveys or workshops to ensure that the product offers the greatest possible benefit.
- Sustainability initiatives: Companies could commit to placing sustainability at the heart of their product development. This includes reducing the ecological footprint of software (e.g. by saving energy) and implementing solutions that benefit the environment in the long term.
- Flexibility and adaptation to new information: Companies should use agile methods that enable developers to react quickly to new information and changes. A regular iteration cycle and feedback loops enable continuous optimization of the product based on new findings.
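The systematic cost-benefit analysis mentioned in the first measure can be as simple as a weighted scoring across stakeholder dimensions. The following Python sketch is a hypothetical illustration; the criteria, weights and option scores are invented, and a real assessment would of course need justified, documented ratings.

```python
def weighted_score(scores, weights):
    """Aggregate per-criterion scores (e.g. 0-10) into one comparable
    number; weights express the relative importance of each criterion."""
    assert scores.keys() == weights.keys()
    total_w = sum(weights.values())
    return sum(scores[c] * weights[c] for c in scores) / total_w

# Hypothetical comparison of two design options for an ACC feature.
weights  = {"user_benefit": 3, "safety_impact": 5, "env_impact": 2}
option_a = {"user_benefit": 8, "safety_impact": 6, "env_impact": 5}
option_b = {"user_benefit": 6, "safety_impact": 9, "env_impact": 6}
```

The ethically interesting step is not the arithmetic but the choice of criteria and weights - here, weighting safety above user benefit is itself a value judgment that should be made explicit and discussed.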
Risk ethics: Possible measures for implementation
- Risk management systems: Companies should introduce formalized risk management systems to identify, evaluate and minimize potential risks in the development process. This includes tools such as FMEA (Failure Mode and Effects Analysis) or risk matrices, which help to systematically assess risks.
- Proactive risk analysis: Developers should look for potential security risks at every stage of the development process and implement preventative measures. This could be done through regular security checks, code reviews and penetration tests to identify and eliminate potential vulnerabilities.
- Communication platforms for risk sharing: Companies should set up platforms or communication channels where developers can share risks with stakeholders. This facilitates transparency in risk assessment and helps to communicate risks effectively, especially to non-technical partners or customers.
- Crisis management and contingency plans: Companies should develop strategies for dealing with unexpected problems, e.g. through contingency plans, in order to be able to react immediately to security gaps or system failures. These plans should be regularly updated and tested.
- Responsibility for damage: Companies should set up mechanisms that define clear responsibilities when products cause damage. This could be through insurance schemes, recall mechanisms or compensation schemes to take ethical responsibility for potential harm.
- Promote diversity in development teams: Diversity within development teams helps to ensure that different perspectives are included in the risk analysis. Companies could promote diversity through targeted recruiting measures and inclusive work cultures to improve the identification and mitigation of risks.
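The FMEA mentioned in the first measure condenses each failure mode into a Risk Priority Number (RPN = severity x occurrence x detection, each factor conventionally rated 1-10). The following Python sketch shows the basic mechanics; the failure modes and ratings are made up for illustration only.

```python
def rpn(severity, occurrence, detection):
    # Classic FMEA Risk Priority Number; each factor is rated 1-10,
    # where higher means worse (detection: harder to detect).
    return severity * occurrence * detection

# Hypothetical failure modes of an ACC: (description, S, O, D).
failure_modes = [
    ("camera blinded by sunlight", 7, 4, 3),
    ("radar misses stationary object", 9, 3, 6),
    ("set-speed display frozen", 3, 2, 2),
]
# Rank failure modes so mitigation effort goes where risk is highest.
ranked = sorted(failure_modes, key=lambda f: rpn(*f[1:]), reverse=True)
```

Note how this quantification is exactly what the risk-ethics discussion above warns about: the ranking is only as good as the ratings behind it, which is why diverse teams and open dialog about the ratings matter.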
Finally, there are other additional measures that are generalist in nature:
- Ethical technology assessments (ethical impact assessments): Companies could conduct regular ethical assessments that review the potential impact of technologies on users and society.
- Open ethical dialogs: Creation of platforms on which ethical concerns can be freely expressed and discussed, both internally and externally. This strengthens the exchange between developers, management and stakeholders.
- Transparent communication and reporting: Companies could report publicly on their ethical guidelines and measures in order to build trust among stakeholders and users.
- A kind of ethics advisory board: internal employees from a wide range of company departments, together with external support (e.g. ethics consultants), could establish a regulatory process to jointly discuss and evaluate reported ethical aspects, issues and the handling of errors and risks. This would also promote respect for individuals and strengthen the collective sense of responsibility.
Through these measures, companies can ensure that ethical aspects are taken into account and actively implemented throughout the entire development process. They help to ensure that developers take ethical aspects into account in their daily work and make more informed decisions. This should create sustainable added value for the development of future technical innovations.
The final step explains how the target concept outlined so far can be integrated into the three selected development models of the automotive industry.
The lowest common denominator that can be found in all three models is the handling of requirements. Each model formalizes the handling of requirements, sometimes in different phases and levels of detail, but these are always at the beginning of the development project.
It would therefore make sense to integrate the guidelines and corresponding measures as a kind of cross-sectional function along all development models. This would create an indirect control function for the provision of appropriate resources from the company's perspective (top-down approach) as well as a support function for each developer (bottom-up approach). As an example, this function or area could be called "Ethical Requirements and Guidelines".
The following overview in Table 6 provides a better understanding of how and where this function (highlighted in black) could be positioned in the three models.
Table 6: Transfer of the target concept to the selected development models of the automotive industry
Illustrations are not included in the reading sample
Source: Own compilation (2024)
The aim of this thesis was to find an answer to the question of what role ethics plays in selected models of technical development in the automotive industry, using the example of autonomous driving. To this end, the three models were identified and described, and it was shown that ethical aspects are only a partial or indirect component. This initial investigation revealed that ethics was not directly considered at the time this thesis was written (see chapter 2.3).
This confirms the initial thesis, which stated that ethics is not directly considered in the development models regarded as the industry standard. Based on this, suitable ethical perspectives were selected and explained, and their relevance for the development of autonomous driving was discussed (see chapter 3). It was shown that the three ethical perspectives identified are directly relevant to the discussion and solution of corresponding problems in the development of autonomous driving.
Based on a simplified abstract case study for a system in the field of autonomous driving, the practical benefits could then be demonstrated by considering the selected ethical perspectives. In connection with the normative question formulated in chapter 1.2, "How can an ethical perspective improve the models of technical development at car manufacturers in such a way that autonomous driving also becomes safer and better?", a target concept was then developed.
For this target concept, guidelines were derived from the ethical perspectives, which can be integrated into the development models in a practical manner for both individual developers and a company as a whole (see chapter 4). This was done with the aim of increasing the quality of the development processes and the results of the development. This showed that ethical perspectives correspond very well in some cases with comparable approaches that are established, for example, through norms and standards. This should make it easier to implement these guidelines and the exemplary measures formulated for implementation in the organization and ultimately in the day-to-day work of developers.
However, the goal of developing a complete concept was only partially achieved. There is a greater need for appropriate tools for dealing with ethical perspectives among developers (bottom-up approach) than at the company level, and only an initial basis could be developed for developers. For companies, many models already exist, albeit generally purely mathematical and statistical ones (e.g. in the context of risk management), in which decisions are primarily driven by economic considerations. Expanding these established standards to include the aspects and advantages of ethical perspectives could create additional added value here.
In terms of an outlook on how to proceed with the results and the findings derived from this work, there are two specific aspects:
In principle, the initial objective of the work was achieved, but the next step would be to carry out initial tests to implement these guidelines in the models described in order to evaluate their practical benefits. As the development of autonomous driving is expected to take many years, a test implementation would be realistic, but would require an appropriate development partner.
From the author's point of view, it is also clear that the discussion of a single ethical perspective in itself is already very complex and multi-layered. In order to further improve the quality of the guidelines, further theoretical elaboration should be carried out in addition to the practical test. Both approaches are potential topics for follow-up activities that build on this work.
In conclusion, it should be noted that the need for responsible technology development will increase in times of artificial intelligence and activities to develop autonomous driving, as this will integrate new actors into people's everyday lives. The behavior and decisions that these actors will make will influence society's perception and acceptance of such innovations.
Table 7: Overview of SAE levels for autonomous driving (incl. comparison with the classifications according to BASt and NHTSA)
Illustrations are not included in the reading sample
Source: Own creation (2024) based on (SAE International 2021)
Illustrations are not included in the reading sample
Figure 15: Product details for Tesla's Autopilot in the vehicle configurator (1/3)
Source: https://www.tesla.com/de_de/modely/design#overview, accessed on 17.07.2024
Illustrations are not included in the reading sample
Figure 16: Product details for Tesla's Autopilot in the vehicle configurator (2/3)
Source: https://www.tesla.com/de_de/modely/design#overview, accessed on 17.07.2024
Illustrations are not included in the reading sample
Figure 17: Product details for Tesla's Autopilot in the vehicle configurator (3/3)
Source: https://www.tesla.com/de_de/modely/design#overview, accessed on 17.07.2024
Table 8: Potential benefits of autonomous driving
Illustrations are not included in the reading sample
Source: Own compilation (2024)
Illustrations are not included in the reading sample
Figure 18: Ethical evaluation concepts
Source: Own compilation (2024) based on (Wagner 2003, p. 102)
Table 9: Overview of ethical perspectives (detailed)
Illustrations are not included in the reading sample
Source: Own compilation (2024) based on (Poszler et al. 2023; Geisslinger et al. 2021; Bendel 2019; Lütge et al. 2020)
Table 10: Gethmann's rational concept of risk
Illustrations are not included in the reading sample
Source: Own compilation (2024) based on (Wagner 2003, pp. 162-165)
2016 23rd Asia-Pacific Software Engineering Conference (APSEC) (2016). Hamilton, New Zealand, 06.12.2016-09.12.2016: IEEE.
ADAC (2020): Tesla advertising with Autopilot is misleading. Available online at https://www.adac.de/news/urteil-tesla-autopilot/, last updated on 05.08.2020, last checked on 13.07.2024.
Schatten, Alexander; Winkler, Dietmar; Gostischa-Franta, Erik; Demolsky, Markus; Biffl, Stefan (2010): Best Practice Software Engineering: Spektrum Akademischer Verlag.
Allouis, Elie; Blake, Rick; Gunes-Lasnet, Sev; Jordan, Tony (2013): A Facility for the Verification & Validation of Robotics & Autonomy for Planetary Exploration. Available online at https://www.researchgate.net/profile/Elie-Allouis/publication/320585817_A_FACILITY_FOR_THE_VERIFICATION_VALIDATION_OF_ROBOTICS_AUTONOMY_FOR_PLANETARY_EXPLORATION/links/59ef1cfc458515ec0c79dad5/A-FACILITY-FOR-THE-VERIFICATION-VALIDATION-OF-ROBOTICS-AUTONOMY-FOR-PLANETARY-EXPLORATION.pdf?_tp=eyJjb250ZXh0Ijp7ImZpcnN0UGFnZSI6Il9kaXJlY3QiLCJwYWdlIjoicHVibGljYXRpb24ifX0, last updated on 05.2013, last checked on 01.08.2024.
Asimov, Isaac (2013): I, Robot. London: HarperVoyager.
Audi AG (2022): Audi study on autonomous driving: the ethical aspects. A key question regarding autonomous driving is: How can humans trust the machine? The Audi study "SocAIty" deals with this. In addition to technological and legal challenges, ethics is one of the focal points of the study. Available online at https://www.audi.com/de/innovation/future-technology/autonomous-driving/ethical-aspects.html#:~:text=Fakt%20ist%3A%20Laut%20Statistischem%20Bundesamt,ein%20Mensch%20auf%20der%20Stra%C3%9Fe., last updated on 27.04.2022, last checked on 22.07.2024.
Awad, Edmond; Dsouza, Sohan; Shariff, Azim; Rahwan, Iyad; Bonnefon, Jean-François (2020): Universals and variations in moral decisions made in 42 countries by 70,000 participants. In: Proceedings of the National Academy of Sciences of the United States of America 117 (5), pp. 2332-2337. DOI: 10.1073/pnas.1911517117.
BASt (2021): Self-driving cars - assisted, automated or autonomous? Available online at https://www.bast.de/DE/Presse/Mitteilungen/2021/06-2021.html, last updated on 11.03.2021, last checked on 22.07.2024.
Bauer, Bernhard; Ayache, Mouadh; Mulhem, Saleh; Nitzan, Meirav; Athavale, Jyotika; Buchty, Rainer; Berekovic, Mladen (2022): On the Dependability Lifecycle of Electrical/Electronic Product Development: The Dual-Cone V-Model. In: Computer 55 (9), pp. 99-106. DOI: 10.1109/MC.2022.3187810.
Beck, Kent; Grenning, James; Martin, Robert C.; Beedle, Mike; Highsmith, Jim; ... (2001): Manifesto for Agile Software Development. Available online at https://agilemanifesto.org/iso/de/manifesto.html, last updated 2001.
Bendel, Oliver (ed.) (2019): Handbook of machine ethics. 1st ed. 2019. Wiesbaden: Springer VS.
BMBF, BMWi, BMVI (2019): Action plan research for autonomous driving. An overarching research framework from the BMBF, BMWi and BMVI. Available online at https://www.bmbf.de/SharedDocs/Publikationen/de/bmbf/5/24688_Aktionsplan_Forschung_fuer_autonomes_Fahren.html, last updated on 07.07.2019, last checked on 21.07.2024.
BMDV (2021): Law on autonomous driving comes into force. Available online at https://bmdv.bund.de/SharedDocs/DE/Artikel/DG/gesetz-zum-autonomen-fahren.html, last updated on 27.07.2021, last checked on 14.07.2024.
BMVI (2017): Ethics Commission - Automated and connected driving. Available online at https://bmdv.bund.de/SharedDocs/DE/Publikationen/DG/bericht-der-ethik-kommission.pdf?__blob=publicationFile, last updated on 06.2017, last checked on 26.07.2024.
Bonk, Lawrence (2024): NHTSA concludes Tesla Autopilot investigation after linking the system to 14 deaths. The organization has opened a new inquiry into the efficacy of recent software fixes. Available online at https://www.engadget.com/nhtsa-concludes-tesla-autopilot-investigation-after-linking-the-system-to-14-deaths-161941746.html?guccounter=1, last updated on 26.04.2024, last checked on 13.07.2024.
Bosch Mobility: Adaptive cruise control for passenger cars. Available online at https://www.bosch-mobility.com/en/solutions/assistance-systems/adaptive-cruise-control/, last checked on 28.09.2024.
Federal Ministry of Transport and Digital Infrastructure (ed.) (2017): Federal Government action plan on the report of the Ethics Commission on Automated and Connected Driving (ethics rules for driving computers). BMDV. Available online at https://www.publikationen-bundesregierung.de/pp-de/publikationssuche/massnahmenplan-der-bundesregierung-zum-bericht-der-ethik-kommission-automatisiertes-und-vernetztes-fahren-ethik-regeln-fuer-fahrcomputer--736078, last updated on 06.09.2017, last checked on 21.07.2024.
BVerfG (2006): Judgment of the First Senate of February 15, 2006 - 1 BvR 357/05 -, para. 1-156. BVerfG. Available online at https://www.bundesverfassungsgericht.de/SharedDocs/Entscheidungen/DE/2006/02/rs20060215_1bvr035705.html, last updated on February 15, 2006, last checked on August 26, 2024.
Lütge, Christoph; Kriebitz, Alexander; Max, Raphael (2020): Ethical and legal challenges of autonomous driving. In: Klaus Mainzer (ed.): Philosophical Handbook of Artificial Intelligence. Wiesbaden: Springer Fachmedien Wiesbaden (Springer Reference Geisteswissenschaften), pp. 1-18.
Dambeck, Holger (2005): Hiroshima and Nagasaki - The late remorse of the nuclear pioneers. Available online at https://www.spiegel.de/wissenschaft/mensch/hiroshima-und-nagasaki-die-spaete-reue-der-atom-pioniere-a-368129.html, last updated on 06.08.2005, last checked on 01.08.2024.
Destatis (2024): Road traffic accident statistics 2023. Available online at https://www.destatis.de/DE/Presse/Pressemitteilungen/2024/07/PD24_261_46241.html, last updated on 05.07.2024, last checked on 22.07.2024.
Driver, Julia (2022): Moral Theory. The {Stanford} Encyclopedia of Philosophy. Edited by The Stanford Encyclopedia of Philosophy. Available online at https://plato.stanford.edu/archives/fall2022/entries/moral-theory/, last updated on 27.06.2022, last checked on 24.09.2024.
European Commission (2022): Autonomous driving: New rules for driver assistance systems come into force. Available online at https://germany.representation.ec.europa.eu/news/autonomes-fahren-neue-regeln-fur-fahrerassistenzsysteme-treten-kraft-2022-07-06_en, last updated on 06.06.2022, last checked on 22.07.2024.
European Union (2007): Charter of Fundamental Rights of the European Union. Article 48 - Presumption of innocence and rights of defense. European Union. Available online at https://fra.europa.eu/de/eu-charter/article/48-unschuldsvermutung-und-verteidigungsrechte#:~:text=Artikel%2048%20entspricht%20Artikel%206,Beweis%20ihrer%20Schuld%20als%20unschuldig., last updated on 14.12.2007, last checked on 27.09.2024.
European Union (2024): EU AI Act. Available online at https://artificialintelligenceact.eu/de/ai-act-explorer/, last updated on 19.04.2024, last checked on 14.07.2024.
European Parliament (2023): Artificial intelligence: opportunities and risks. Artificial intelligence (AI) is having an increasing impact on our lives. More about the opportunities and risks for security, democracy, companies and jobs. Available online at https://www.europarl.europa.eu/topics/de/article/20200918STO87404/kunstliche-intelligenz-chancen-und-risiken, last updated on 20.06.2023, last checked on 10.07.2024.
Finnis, John (2024): Natural law Theories. With the collaboration of Edward N. Zalta and Uri Nodelman. The Stanford Encyclopedia of Philosophy. Available online at https://plato.stanford.edu/entries/natural-law-theories/, last updated on 20.04.2024, last checked on 24.09.2024.
Frankena, William K. (2017): Ethics. An analytical introduction. 6th edition. Wiesbaden: Springer VS.
Frenz, Walter (2020): Handbook Industry 4.0: Law, Technology, Society. Berlin, Heidelberg: Springer Berlin Heidelberg.
Gartner: Hype Cycle. Available online at https://www.gartner.com/en/research/methodologies/gartner-hype-cycle, last checked on 14.07.2024.
Gartner (ed.) (2015): Gartner's 2015 Hype Cycle for Emerging Technologies. Available online at https://www.gartner.com/en/newsroom/press-releases/2015-08-18-gartners-2015-hype-cycle-for-emerging-technologies-identifies-the-computing-innovations-that-organizations-should-monitor, last updated on 18.08.2015, last checked on 22.07.2024.
Gartner; Perri, Lori (2024): Impact Radar for 2024. Available online at https://emt.gartnerweb.com/ngw/globalassets/en/articles/images/impact-radar-2024.jpg, last updated on 12.04.2024, last checked on 14.07.2024.
Geisslinger, Maximilian; Poszler, Franziska; Betz, Johannes; Lütge, Christoph; Lienkamp, Markus (2021): Autonomous Driving Ethics: from Trolley Problem to Ethics of Risk. In: Philos. Technol. 34 (4), pp. 1033-1055. DOI: 10.1007/s13347-021-00449-4.
Gethmann, Carl Friedrich (2023): Constructive Ethics. Berlin, Heidelberg: Springer Berlin Heidelberg (52).
Gleißner, Werner (2019): Risk management. Available online at https://wirtschaftslexikon.gabler.de/definition/risikomanagement-42454/version-371674, last updated on 17.10.2019, last checked on 10.08.2024.
Goodall, Noah J. (2014): Ethical Decision Making during Automated Vehicle Crashes. In: Transportation Research Record 2424 (1), pp. 58-65. DOI: 10.3141/2424-07.
Goodall, Noah J. (2016): Can You Program Ethics Into a Self-Driving Car? When self-driving cars kill, it's the code (and the coders) that will be put on trial. IEEE Spectrum. Available online at https://spectrum.ieee.org/can-you-program-ethics-into-a-selfdriving-car, last updated on 31.05.2016, last checked on 30.07.2024.
Gräßler, Iris; Oleff, Christian (2022): Systems Engineering. Understanding and industrial implementation. 1st ed. 2022. Berlin, Heidelberg: Springer Berlin Heidelberg; Springer Vieweg.
Grunwald, Armin; Simonidis-Puschmann, Melanie (2013): Handbook of Technology Ethics. Stuttgart: J.B. Metzler.
Hankins, Jonathan (2024): What Does "Responsible Innovation" Mean? A new movement tries to define engineers' roles and responsibilities in the innovation process. Available online at https://spectrum.ieee.org/what-does-responsible-innovation-mean, last updated on 24.06.2024, last checked on 01.08.2024.
Hawkins, Andrew J. (2024): Tesla's Autopilot and Full Self-Driving linked to hundreds of crashes, dozens of deaths. NHTSA found that Tesla's driver-assist features are insufficient at keeping drivers engaged in the task of driving, which can often have fatal results. Edited by The Verge. Available online at https://www.theverge.com/2024/4/26/24141361/tesla-autopilot-fsd-nhtsa-investigation-report-crash-death, last updated on 26.04.2024, last checked on 15.07.2024.
Heidbrink, Ludger; Langbehn (2017): Handbuch Verantwortung: Springer Fachmedien Wiesbaden.
Heimes, Heiner Hans; Kampker, Achim; Schmitt, Fabian; Demming, Michael (2024): Product development process. In: Achim Kampker and Heiner Hans Heimes (eds.): Electromobility. Berlin, Heidelberg: Springer Berlin Heidelberg, pp. 341-351.
Höhn, Reinhard; Höppner, Stephan (2008): The V-Modell XT: Springer Berlin Heidelberg.
Holsten, Lennart; Frank, Christian; Krüger, Jacob; Leich, Thomas (2023): Electrics/Electronics Platforms in the Automotive Industry: Challenges and Directions for Variant-Rich Systems Engineering. In: Myra Cohen, Thomas Thüm and Jacopo Mauro (eds.): Proceedings of the 17th International Working Conference on Variability Modeling of Software-Intensive Systems. VaMoS 2023: 17th International Working Conference on Variability Modeling of Software-Intensive Systems. Odense Denmark, 25 01 2023 27 01 2023. New York, NY, USA: ACM, pp. 50-59.
Kant, Immanuel (1962): Collected Writings, vol. 9: Logic. Physical Geography. Pedagogy. De Gruyter.
Karnouskos, Stamatis (2020): Self-Driving Car Acceptance and the Role of Ethics. In: IEEE Trans. Eng. Manage. 67 (2), pp. 252-265. DOI: 10.1109/TEM.2018.2877307.
Liu, Bohan; Zhang, He; Zhu, Saichun (2016): An Incremental V-Model Process for Automotive Development. In: 2016 23rd Asia-Pacific Software Engineering Conference (APSEC). 2016 23rd Asia-Pacific Software Engineering Conference (APSEC). Hamilton, New Zealand, 06.12.2016 - 09.12.2016: IEEE, pp. 225-232.
Löschnig, Nadine: "Das Trolley-Dilemma". A treatment of the thought experiment in theory and practice with regard to its usefulness in philosophy lessons. Karl-Franzens-University Graz. Available online at https://unipub.uni-graz.at/obvugrhs/content/titleinfo/5099194?lang=de, last checked on 19.07.2024.
Maurer, Markus; Gerdes, J. Christian; Lenz, Barbara; Winner, Hermann (eds.) (2015): Autonomous Driving. Berlin, Heidelberg: Springer Berlin Heidelberg.
Messner, Hannah Maria (2018): Ethical and legal challenges of autonomous driving. The trolley problem and personal freedom of choice. Master's thesis. Paris Lodron University of Salzburg, Salzburg. Faculty of Law. Available online at https://eplus.uni-salzburg.at/Abschlussarbeiten/download/pdf/4981580, last checked on 04.08.2024.
Neuhäuser, Christian; Raters, Marie-Luise; Stoecker, Ralf (eds.) (2023): Handbook of Applied Ethics. Stuttgart: J.B. Metzler.
NHTSA: Automated Vehicles for Safety - The Road to Full Automation. Available online at https://www.nhtsa.gov/vehicle-safety/automated-vehicles-safety, last checked on 22.07.2024.
NHTSA (2024): Additional Information Regarding Investigation EA22002. Available online at https://static.nhtsa.gov/odi/inv/2022/INCR-EA22002-14496.pdf, last updated on 25.04.2024, last checked on 14.07.2024.
Nida-Rümelin, Julian (2012): Risk ethics.
Noé, Manfred (ed.) (2013): Innovation 2.0. Wiesbaden: Springer Fachmedien Wiesbaden.
Nyholm, Sven; Smids, Jilles (2020): Automated cars meet human drivers: responsible human-robot coordination and the ethics of mixed traffic. In: Ethics Inf Technol 22 (4), pp. 335-344. DOI: 10.1007/s10676-018-9445-9.
Perri, Lori (2023): Gartner's Hype Cycle for Emerging Technologies 2023, Gartner. Available online at https://www.gartner.de/de/artikel/was-ist-neu-im-2023-gartner-hype-cycle-fuer-neue-technologien, last updated on 23.08.2023, last checked on 22.07.2024.
Petermann, Jan (2020): Five years of Dieselgate: After the crisis is before the crisis. Edited by Deutsche Welle. Available online at https://www.dw.com/de/f%C3%BCnf-jahre-dieselgate-nach-der-krise-ist-vor-der-krise/a-54972755, last updated on 18.09.2020, last checked on 02.08.2024.
Winzer, Petra (2013): Generic Systems Engineering: Springer Berlin Heidelberg.
Poszler, Franziska; Geisslinger, Maximilian; Betz, Johannes; Lütge, Christoph (2023): Applying ethical theories to the decision-making of self-driving vehicles: A systematic review and integration of the literature. In: Technology in Society 75, p. 102350. DOI: 10.1016/j.techsoc.2023.102350.
Pretschner, Alexander; Zuber, Niina; Gogoll, Jan; Kacianka, Severin; Nida-Rümelin, Julian (2021): Ethics in agile software development. In: Informatik Spektrum 44 (5), pp. 348-354. DOI: 10.1007/s00287-021-01390-8.
Rahwan, Iyad (2020): Sacrifice one person to save five? A study confronted 70,000 test subjects in 42 countries with moral dilemmas and found both similarities and differences. Max Planck Society. Available online at https://www.mpg.de/14384755/trolley-dilemma-international, last updated on 22.01.2020, last checked on 26.07.2024.
Rein, Andreas (2023): Tesla wins lawsuit after fatal accident. Ed. by ARD Los Angeles. Available online at https://www.tagesschau.de/wirtschaft/unternehmen/tesla-prozess-autonomes-fahren-100.html, last updated on 01.11.2023, last checked on 15.07.2024.
Reveland, Carla; Siggelkow, Pascal (2023): AI-generated disinformation on the rise. Russia's President Putin kneeling before China's head of state Xi or ex-US President Trump being arrested: AI-generated images have attracted a lot of attention. But how can you recognize them? ARD fact finder. Available online at https://www.tagesschau.de/faktenfinder/kontext/ki-desinformation-fakes-101.html, last updated on 31.03.2023, last checked on 11.07.2024.
SAE International (2021): Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. SAE J3016. SAE International. Available online at https://www.sae.org/standards/content/j3016_202104/, last updated on 30.04.2021, last checked on 20.07.2024.
Scanlon, Thomas M. (2000): What we owe to each other. [Reprint]. Cambridge, Mass.: Belknap Press of Harvard Univ. Press.
Schäuffele, Jörg; Zurawka, Thomas (2016): Automotive software engineering. Efficient use of basics, processes, methods and tools. 6th ed. Wiesbaden: Springer Vieweg (ATZ/MTZ-Fachbuch).
Schesswendter, Raimund (2021): Elon Musk: Autonomous driving more difficult than expected. The Tesla boss announced several times that the company's products would soon be able to drive themselves. Now he admits that he underestimated the task. Ed. by t3n. Available online at https://t3n.de/news/elon-musk-tesla-autonomes-fahren-schwieriger-fsd-autopilot-kompliziert-1389711/, last updated on 06.07.2021, last checked on 12.07.2024.
Schlüter, Nadine (2023): Generic Systems Engineering. A methodical approach to complexity management. 3rd edition 2023. Berlin: Springer Berlin; Springer Vieweg.
Schmidt, Christin (2023a): Graph Theory and Network Analysis: Springer Nature.
Schmidt, Thomas (2023b): Deontological ethics. In: Christian Neuhäuser, Marie-Luise Raters and Ralf Stoecker (eds.): Handbuch Angewandte Ethik. Stuttgart: J.B. Metzler, pp. 67-74.
Schomberg, Rene von (2011): Prospects for Technology Assessment in a Framework of Responsible Research and Innovation. In: SSRN Journal. DOI: 10.2139/ssrn.2439112.
Schuh, Günther; Graf, Leonie; Zeller, Paul; Scholz, Paul; Studerus, Bastian (2019): An industry in transition - Shaping technological change in the automotive industry.
Seiwert, Martin (2023): Musk's fairy tale of autonomous driving. Tesla boss Elon Musk has been promising autonomous driving cars for years. In reality, however, Tesla is now far behind in terms of technology, as a new study commissioned by WirtschaftsWoche shows. Available online at https://www.wiwo.de/my/unternehmen/auto/neue-studie-musks-maer-vom-autonomen-fahren/29171630.html, last updated on 26.05.2023, last checked on 13.07.2024.
Technical Committee 4.10, V.D.I. (2022): Development of mechatronic and cyber-physical systems. Available online at https://www.researchgate.net/publication/361832219_Development_of_mechatronic_and_cyber-physical_systems_Entwicklung_mechatronischer_und_cyber-physischer_Systeme.
Tesla: Autopilot functionality and full potential for autonomous driving. Available online at https://www.tesla.com/de_DE/support/autopilot, last checked on 14.07.2024.
Tiedemann, Paul (2023): Philosophical Foundations of Human Rights. Berlin, Heidelberg: Springer Berlin Heidelberg.
UC Berkeley (ed.): The Systems Engineering Process. Available online at https://connected-corridors.berkeley.edu/guiding-project-systems-engineering-process, last checked on 03.08.2024.
VDA; QMC (ed.): Automotive SPICE. Available online at https://vda-qmc.de/automotive-spice/, last checked on 02.08.2024.
VDA Working Group 13 (2023): Automotive SPICE - Process Reference Model & Process Assessment Model. VDA; QMC. Available online at https://vda-qmc.de/wp-content/uploads/2023/12/Automotive-SPICE-PAM-v40.pdf, last updated on 29.11.2023, last checked on 02.08.2024.
Volkswagen (2024): Group basic requirements for software. Basic requirements that the Volkswagen Group places on vehicle-installed and vehicle-related software/software-defined systems and their development processes. Technical development, cross-sectional load specification: LAH.893.909. Available online at https://www.volkswagen-kgas.com/presence/downloads/Konzern_Grundanforderungen_Software_LAH_893909_Version_43_BL410_Final_Homepage_2024_05_30.pdf, last updated on 31.05.2024, last checked on 27.09.2024.
Wagner, Bernd (2003): Prolegomena to an ethics of risk. Foundations, problems, criticism. Dissertation. Heinrich Heine University, Düsseldorf. Faculty of Philosophy. Available online at https://docserv.uni-duesseldorf.de/servlets/DerivateServlet/Derivate-2777/777.pdf, last checked on 15.08.2024.
Waymo: Self-Driving Car Technology for a Reliable Ride - Waymo Driver. Available online at https://waymo.com/waymo-driver/, last checked on 15.07.2024.
WHO (2023): Road traffic injuries 2023. Available online at https://www.who.int/news-room/fact-sheets/detail/road-traffic-injuries, last updated on 13.12.2023, last checked on 22.07.2024.
Research Service of the German Bundestag (2021): Brief information on medical guidelines. Available online at https://www.bundestag.de/resource/blob/876444/83593a354ba88614b34657f2ff82ed80/WD-9-094-21-pdf-data.pdf, last updated on 15.11.2021, last checked on 12.09.2024.
Zuber, Niina Marja Christine (2022): Ethics in software development. Ludwig-Maximilians-Universität München.
[...]
1 Cf. this paragraph Gartner and Perri 2024.
2 See European Parliament 2023.
3 See Reveland and Siggelkow 2023.
4 See European Union 2024.
5 Cf. Gartner.
6 Schesswendter 2021.
7 Cf. this paragraph Seiwert 2023 and Waymo.
8 Tesla.
9 ADAC 2020.
10 For this and the following paragraph, see Hawkins 2024 and Rein 2023.
11 Cf. on this paragraph Bonk 2024 and NHTSA 2024.
12 See Noé 2013, p. 1.
13 Cf. this and the following paragraph Dambeck 2005.
14 Cf. Hankins 2024 (translated from English).
15 See (translated from English) Schomberg 2011, p. 9.
16 See Holsten et al. 2023, p. 50.
17 C.A.S.E. is the common abbreviation for Connected, Autonomous, Shared and Electrified.
18 See Schuh et al. 2019, p. 3-4.
19 For this and the following paragraph, see Poszler et al. 2023, p. 1.
20 Cf. SAE International 2021; BASt 2021; NHTSA.
21 For further information, see
22 Cf. BMBF, BMWi, BMVI 2019, 4 ff.
23 For this and the following paragraph, see Federal Ministry of Transport and Digital Infrastructure 2017, p. 1-7.
24 See BMDV 2021.
25 See European Commission 2022.
26 See Audi AG 2022.
27 Cf. Destatis 2024.
28 See WHO 2023, p. 4.
29 See Audi AG 2022.
30 Cf. Löschnig, pp. 16-22.
31 Cf. on this paragraph Rahwan 2020.
32 See Awad et al. 2020; Rahwan 2020.
33 For this and the following paragraph, see Lütge et al. 2020, pp. 1-17.
34 BMVI 2017, p. 16.
35 Cf. Goodall 2014, 2016.
36 For this and the following paragraph, see Lütge et al. 2020, p.
37 See Lütge et al. 2020, p. 17.
38 Cf. Maurer et al. 2015, 70 ff.
39 For this and the following paragraph, see Heimes et al. 2024, pp. 341-350.
40 See Gräßler and Oleff 2022, p. 156.
41 See 2016 23rd Asia-Pacific Software Engineering Conference (APSEC) 2016, pp. 225-232.
42 Cf. Schatten et al. 2010, p. 50.
43 Cf. on this paragraph Gräßler and Oleff 2022, p. 143.
44 See Schäuffele and Zurawka 2016, p. 152.
45 See Schäuffele and Zurawka 2016, p. 34.
46 See Schäuffele and Zurawka 2016, p. 30.
47 Cf. Höhn and Höppner 2008, p. 3 and Liu et al. 2016.
48 Gräßler and Oleff 2022, p. 15.
49 See Gräßler and Oleff 2022, p. 6.
50 See Schlüter 2023, p. 3-4.
51 Cf. Winzer 2013, 1 ff.
52 For this and the following paragraph, see Gräßler and Oleff 2022, pp. 227-228.
53 See Schlüter 2023, p. 34.
54 See VDA and QMC.
55 See Petermann 2020.
56 Cf. Schmidt 2023a, pp. 181-184.
57 Cf. Gethmann 2023, 2 and 165 ff.; Neuhäuser et al. 2023, p. 215.
58 See Gleißner 2019.
59 From the Latin, meaning "the other way around" (vice versa).
60 Cf. on this paragraph Grunwald and Simonidis-Puschmann 2013, p. 18.
61 "Ex post" is a Latin phrase meaning "after the fact" or "subsequently". Ex-post analyses evaluate events, circumstances or conditions after their introduction or occurrence.
62 Cf. on this paragraph Grunwald and Simonidis-Puschmann 2013, p. 223.
63 Cf. on this paragraph Wagner 2003, 101 and 104.
64 Cf. on this paragraph Grunwald and Simonidis-Puschmann 2013, pp. 305-306.
65 Cf. on this paragraph Grunwald and Simonidis-Puschmann 2013, p. 306.
66 Cf. on this paragraph Grunwald and Simonidis-Puschmann 2013, p. 306.
67 Cf. on this paragraph Wagner 2003, p. 162.
68 Cf. on this and the following paragraphs Wagner 2003, pp. 162-165.
69 Cf. on this and the following paragraphs Nida-Rümelin 2012, 73-91, 99-100 and 102.
70 For this and the following four paragraphs, see Driver 2022.
71 Cf. Karnouskos 2020, p. 4; Geisslinger et al. 2021, p. 1038; Kant 1962, p. 421.
72 Cf. on this Schmidt 2023b, pp. 67-73.
73 See Finnis 2024.
74 Cf. this and the following paragraph Frankena 2017, 18 and 24-25.
75 Frankena 2017, p. 18.
76 Cf. on this paragraph Frankena 2017, p. 26-27.
77 Cf. this section Poszler et al. 2023, p. 3; Asimov 2013.
78 For this and the following paragraph, see Geisslinger et al. 2021, pp. 1038-1039; Scanlon 2000; Tiedemann 2023, 50 ff.
79 For this and the following paragraphs, see Frenz 2020, pp. 713-717.
80 also known colloquially as a corner case
81 Cf. on this paragraph Geisslinger et al. 2021, pp. 1038-1039.
82 Cf. on this paragraph Messner 2018, 68 et seq.
83 Cf. on this paragraph Messner 2018, p. 69-70.
84 Cf. this paragraph Heidbrink and Langbehn 2017, p. 173.
85 See Neuhäuser et al. 2023, p. 59-65.
86 For this and the following paragraph, see Poszler et al. 2023, pp. 3-4.
87 Cf. on this paragraph Nida-Rümelin 2012, p. 134.
88 Cf. on this paragraph Geisslinger et al. 2021, pp. 1038-1039.
89 Cf. on this paragraph Geisslinger et al. 2021, pp. 1038-1039.
90 Cf. on this paragraph Nida-Rümelin 2012, p. 134.
91 Cf. BVerfG 2006.
92 For this and the following paragraph, see Geisslinger et al. 2021, pp. 1038-1039.
93 Cf. on this paragraph Geisslinger et al. 2021, pp. 1038-1039.
94 Cf. on this paragraph Goodall 2014, p. 10.
95 See Driver 2022.
96 See Finnis 2024.
97 See Bosch Mobility.
98 See Volkswagen 2024.
99 Cf. on this paragraph Nyholm and Smids 2020, p. 336.
100 Cf. on this paragraph Nyholm and Smids 2020, p. 337.
101 Cf. European Union 2007; Pretschner et al. 2021.
102 Cf. Beck et al. 2001; Pretschner et al. 2021.
103 See Zuber 2022, p. 86.
104 Cf. Pretschner et al. 2021, pp. 348-349.
105 Cf. Research Service of the German Bundestag 2021.