Institut für Bildung, Arbeit und Gesellschaft
Permanent URI for this collection: https://hohpublica.uni-hohenheim.de/handle/123456789/28
Browsing Institut für Bildung, Arbeit und Gesellschaft by Sustainable Development Goals "16"
Now showing 1 - 2 of 2
Publication
Autonomous weapons: considering the rights and interests of soldiers (2025) Haiden, Michael; Richter, Florian
The development of autonomous weapons systems (AWSs), which would make decisions on the battlefield without direct input from humans, has the potential to dramatically change the nature of war. Due to the revolutionary potential of these technologies, it is essential to discuss their moral implications. While the academic literature often highlights their morally problematic nature, with some proposing to ban them outright, this paper highlights an important benefit of AWSs: protecting the lives, as well as the mental and physical health, of soldiers. If militaries can avoid sending humans into dangerous situations, or relieve drone operators of tasks that lead to lifelong trauma, this appears morally desirable – especially in a world where many soldiers are still drafted against their will. Many arguments have nonetheless been raised against AWSs. We show, however, that although AWSs are potentially dangerous, these criticisms apply equally to human soldiers and to the weapons they steer. The combination of both claims makes a strong case against a ban on AWSs where such a ban is possible. Instead, researchers should focus on mitigating their drawbacks and refining their benefits.

Publication
Navigating the social dilemma of autonomous systems: normative and applied arguments (2025) Bodenschatz, Anja
Autonomous systems (ASs) are becoming ubiquitous in society. For one specific ethical challenge, normative discussion remains scarce: the social dilemma of autonomous systems (SDAS). This dilemma has been assessed in empirical studies on autonomous vehicles (AVs): many people generally agree to a utilitarian programming of ASs, but do not want to buy a machine that might sacrifice them deterministically. One possible way to mitigate the SDAS would be for ASs to randomize between options of action. This would bridge the gap between a socially accepted program and potential AS users' desire for some sense of self-protection. However, the normativity of randomization has not yet been evaluated for dilemmas between self-preservation and self-sacrifice for the "greater good" of saving several other lives. This paper closes this gap. It provides an overview of the most prominent normative and applied arguments for all three options of action in the dilemmas of interest: self-sacrifice, self-preservation, and randomization. As a prerequisite for inclusion in societal discussions on AS programming, it is ascertained that a normative argument can be elicited for each potential course of action in abstract thought experiments. The paper then discusses factors that may shift the normative claim between self-sacrifice, self-preservation, and randomization in the case of AV programming. The factors identified in this comparison are generalized into guiding dimensions for moral consideration, along which all three options of action should be evaluated when programming ASs for dilemmas involving their users.
