Browsing by Person "Richter, Florian"

Now showing 1 - 2 of 2
    Publication
    Autonomous weapons: considering the rights and interests of soldiers
    (2025) Haiden, Michael; Richter, Florian
    The development of autonomous weapons systems (AWSs), which would make decisions on the battlefield without direct input from humans, has the potential to dramatically change the nature of war. Due to the revolutionary potential of these technologies, it is essential to discuss their moral implications. While the academic literature often highlights their morally problematic nature, with some proposing to ban them outright, this paper emphasizes an important benefit of AWSs: protecting the lives, as well as the mental and physical health, of soldiers. If militaries can avoid sending humans into dangerous situations or relieve drone operators of tasks that lead to lifelong trauma, this appears morally desirable – especially in a world where many soldiers are still drafted against their will. Many arguments have nonetheless been raised against AWSs. We show, however, that although AWSs are potentially dangerous, these criticisms apply equally to human soldiers and the weapons they steer. Taken together, both claims make a strong case against a ban on AWSs where such a ban is possible. Instead, researchers should focus on mitigating their drawbacks and refining their benefits.
    Publication
    Educational ideals affect AI acceptance in learning environments
    (2026) Richter, Florian (Catholic University of Eichstätt-Ingolstadt, Eichstätt, Germany); Uhl, Matthias (University of Hohenheim, Stuttgart, Germany)
    AI is increasingly used in learning environments to monitor, test, and educate students and to allow them to take more individualized learning paths. The success of AI in education will, however, require the acceptance of this technology by university management, faculty, and students. This acceptance will depend on the added value that stakeholders ascribe to the technology. In two empirical studies, we investigate the hitherto neglected question of what impact educational ideals have on the acceptance of AI in learning environments. We find clear evidence that our study participants consider humanistic educational ideals less suitable for implementing AI in education than competence-based ideals. This implies that research on the influence of teaching and learning philosophies could be an enlightening component of a comprehensive research program on human-AI interaction in educational contexts.
