Bachelor's thesis, 2024
67 pages, grade: 1.3
1. Introduction
1.1 Structure
1.2 Scope
2. Practices of decision making
3. Historical Context
4. Terminology
5. 1st Counterfactual: fewer civilian casualties
5.1 Reinvention of accuracy
5.2 Dual Use
5.3 System Destruction Warfare
6. 2nd Counterfactual: Safeguard and policy
6.1 International Ban problems
6.2 Meaningful Human Control
7. 3rd Counterfactual: No defiance
7.1 Decorporealizing of state power
7.2 Air Power
8. Transition to Hypotheticals
9. 1st Hypothetical: adversarial AI on the battlefield
9.1 Command and Control
9.2 Adversarial AI Development
10. 2nd Hypothetical: long-term commitment
10.1 Shortcomings
10.2 No backing out
11. Conclusion
This work examines the narratives—both counterfactual and hypothetical—that justify the integration of data-based weapon systems into the US military's arsenal. By applying a cultural-scientific framework of decision-making, it analyzes how these narratives function to overcome doubts and justify technological adoption despite inherent risks and ethical concerns.
5.1 Reinvention of accuracy
As Lucy Suchman argues, however, civilian casualties result from an amalgamation of factors that cannot easily be resolved by a technological fix. She identifies Situational Awareness as the central capability that data-based technology is meant to enhance. Rather than relying on an individual soldier's accurate perception of the environment, data-based technology replaces biased subjectivity with a sensory network. Yet this network remains flawed by its own innate characteristics. Suchman cites a number of works that criticize the system of drone surveillance as a network of perception constrained by its presuppositions of enemy presence and positive identification, and by a decontextualization that silences the observed realities.
Further, Suchman illustrates image-recognition procedures using Project Maven as an example. This federal project intended to use Google's cloud infrastructure to label video imagery from drone surveillance. While the model was trained on archived battlefield footage collected by drones, it required an initial set of 150,000 hand-labelled images of objects across 38 categories. Yet these categorizations remain opaque: it is known neither which categories were used nor which criteria an object must meet to constitute a threat for the trained model.
1. Introduction: Outlines the scope of the thesis, focusing on how counterfactual and hypothetical narratives shape the discourse and justification for deploying data-based military technologies.
2. Practices of decision making: Establishes a theoretical framework for analyzing decision-making as a cultural and social process, emphasizing how narratives are used to manage contingency and doubt.
3. Historical Context: Briefly examines the historical conditions, particularly the "Global War on Terror," that drove the shift toward data-driven intelligence and drone warfare.
4. Terminology: Addresses the definitional challenges surrounding "Lethal Autonomous Weapon Systems" (LAWS) and justifies the use of the term "data-based military technology."
5. 1st Counterfactual: fewer civilian casualties: Analyzes the narrative claim that data-based systems reduce collateral damage, contrasting this with Suchman’s critique of the "Precision Shift."
6. 2nd Counterfactual: Safeguard and policy: Investigates the vague nature of policy proposals like "Meaningful Human Control" and how these reflect colonial-era attitudes toward governance.
7. 3rd Counterfactual: No defiance: Explores how automated command structures aim to eliminate human agency and soldiers' potential refusal to follow orders, drawing parallels to historical "Air Power."
8. Transition to Hypotheticals: Provides a bridge between historical counterfactuals and prospective hypothetical scenarios that drive future military investments.
9. 1st Hypothetical: adversarial AI on the battlefield: Discusses the fear of being outpaced by adversarial AI and how this fear justifies sustained military investment.
10. 2nd Hypothetical: long-term commitment: Examines how initial investments create a "lock-in" effect where subsequent reliance on unproven technology becomes a strategic necessity.
11. Conclusion: Summarizes how neither counterfactual nor hypothetical narratives adequately address the deep-rooted technical and ethical challenges of autonomous military systems.
Data-based military technology, Artificial Intelligence, decision-making, counterfactual narratives, hypothetical scenarios, US military, command and control, situational awareness, Lethal Autonomous Weapon Systems, Meaningful Human Control, Global War on Terror, asymmetric warfare, algorithmic warfare, colonial traditions, military ethics.
This work investigates the narratives used by the US military to justify the deployment of data-based technologies and artificial intelligence in modern warfare.
The work explores military decision-making, the history of drone warfare, ethical implications of automation, the influence of colonial power dynamics, and the geopolitical pressures behind AI investment.
The goal is to analyze how counterfactual and hypothetical narratives function as sense-making mechanisms to legitimize technological military expansion despite existing criticism and significant technical shortcomings.
The author utilizes a cultural-scientific analysis of decision-making, focusing on decisions as social processes rather than outcomes, while incorporating qualitative criticism from sociological and historical perspectives.
The main body examines three main counterfactuals (precision, policy safeguards, and defiance) and two hypothetical scenarios (adversarial AI threat and long-term investment commitments) to assess the "procedural rationality" of these military choices.
Key terms include data-based military technology, Artificial Intelligence, decision-contingency, algorithmic warfare, colonialist traditions, and OODA loop strategy.
The author argues that "Meaningful Human Control" is an epistemologically narrow and vague policy concept that fails to address the power dynamics between the "controller" and the "controlled," effectively excluding the perspectives of those most vulnerable to these weapons.
The OODA loop serves as the standardized command procedure for kinetic engagement; the author analyzes how data-based technology is intended to accelerate this loop, even at the risk of human error and ethical failure.
The study suggests that the shift toward autonomous and data-driven systems creates a "tense and volatile" environment where AI-driven deterrence could potentially lead to unforeseen arms races and the further erosion of the "civilian" as a protected concept.