Fakultät Wirtschafts- und Sozialwissenschaften
Permanent URI for this community: https://hohpublica.uni-hohenheim.de/handle/123456789/22
The Faculty combines research with modern teaching to international standards. The Hohenheim Model interlinks aspects of business administration, economics, the social sciences, and law.
Homepage: https://wiso.uni-hohenheim.de/
Browsing Fakultät Wirtschafts- und Sozialwissenschaften by Classification "170"
Now showing 1 - 5 of 5
Publication: Autonomous weapons: considering the rights and interests of soldiers (2025)
Haiden, Michael; Richter, Florian

The development of autonomous weapons systems (AWSs), which would make decisions on the battlefield without direct human input, has the potential to dramatically change the nature of war. Given the revolutionary potential of these technologies, it is essential to discuss their moral implications. While the academic literature often highlights their morally problematic nature, with some proposing an outright ban, this paper highlights an important benefit of AWSs: protecting the lives as well as the mental and physical health of soldiers. If militaries can avoid sending humans into dangerous situations, or relieve drone operators of tasks that cause lifelong trauma, this appears morally desirable, especially in a world where many soldiers are still drafted against their will. There are nonetheless many arguments against AWSs. We show, however, that although AWSs are potentially dangerous, these criticisms apply equally to human soldiers and to the weapons they steer. Taken together, these claims make a strong case against banning AWSs where such a ban is possible. Instead, researchers should focus on mitigating their drawbacks and refining their benefits.

Publication: Does a smarter ChatGPT become more utilitarian? (2026)
Pfeffer, Jürgen (Technical University of Munich, TUM School of Social Sciences and Technology, Munich, Germany); Krügel, Sebastian (Faculty of Business, Economics and Social Sciences, University of Hohenheim, Stuttgart, Germany); Uhl, Matthias (Faculty of Business, Economics and Social Sciences, University of Hohenheim, Stuttgart, Germany)

Hundreds of millions of users regularly interact with large language models (LLMs) to get advice on all aspects of life. The increase in LLMs’ logical capabilities may be accompanied by unintended side effects with ethical implications. Focusing on recent model developments of ChatGPT, we show clear evidence of a systematic shift in ethical stances that accompanied a leap in the models’ logical capabilities. Specifically, as ChatGPT’s capacity grows, it gives decidedly more utilitarian answers to the two most famous dilemmas in ethics. Given the documented impact that LLMs have on users, we call for a research focus on the prevalence and dominance of ethical theories in LLMs, as well as their potential shift over time. Moreover, our findings highlight the need for continuous monitoring and transparent public reporting of LLMs’ moral reasoning to ensure their informed and responsible use.

Publication: Editorial: Responsible research and innovation as a toolkit: Indicators, application, and context (2023)
Buchmann, Tobias; Dreyer, Marion; Müller, Matthias; Pyka, Andreas

Publication: HCI driving alienation: autonomy and involvement as blind spots in digital ethics (2024)
Jungtäubl, Marc; Zirnig, Christopher; Ruiner, Caroline

The ongoing development and adoption of digital technologies such as AI in business raises ethical concerns and challenges. The main topics are the design of digital technologies, their tasks and competencies in organizational practice, and their collaboration with humans. Previous guidelines on digital ethics mainly prioritize technological aspects, such as the nondiscriminatory design of AI, its transparency, and technically constrained (distributed) agency in AI systems, leaving the consideration of the human factor and the implementation of ethical guidelines in organizational practice unclear. We analyze the relationship between human–computer interaction (HCI), autonomy, and worker involvement, and its impact on workers’ experience of alienation at work. We argue that considering autonomy and worker involvement is crucial for HCI. Based on a quantitative empirical study of 1,989 workers in Germany, the analysis shows that when worker involvement is high, the effect of HCI use on alienation decreases. The results contribute to the understanding of the use of digital technologies with regard to worker involvement, reveal a blind spot in widespread ethical debates about AI, and have practical implications for digital ethics in organizational practice.

Publication: Navigating the social dilemma of autonomous systems: normative and applied arguments (2025)
Bodenschatz, Anja

Autonomous systems (ASs) are becoming ubiquitous in society. For one specific ethical challenge, normative discussion is scarce: the social dilemma of autonomous systems (SDAS). This dilemma has been assessed in empirical studies on autonomous vehicles (AVs): many people generally agree to a utilitarian programming of ASs but do not want to buy a machine that might sacrifice them deterministically. One possible way to mitigate the SDAS would be for ASs to randomize between options of action. This would bridge the gap between a socially accepted program and potential AS users’ urge for some sense of self-protection. However, the normativity of randomization has not yet been evaluated for dilemmas between self-preservation and self-sacrifice for the “greater good” of saving several other lives. This paper closes that gap. It provides an overview of the most prominent normative and applied arguments for all three options of action in the dilemmas of interest: self-sacrifice, self-preservation, and randomization. As a prerequisite for inclusion in societal discussions on AS programming, it ascertains that a normative argument can be elicited for each potential course of action in abstract thought experiments. The paper then discusses factors that may shift the normative claim between self-sacrifice, self-preservation, and randomization in the case of AV programming. The factors identified in this comparison are generalized into guiding dimensions for moral consideration along which all three options of action should be evaluated when programming ASs for dilemmas involving their users.
