Browsing by Person "Uhl, Matthias"

Now showing 1 - 2 of 2
Publication
Does a smarter ChatGPT become more utilitarian?
(2026) Pfeffer, Jürgen (Technical University of Munich, TUM School of Social Sciences and Technology, Munich, Germany); Krügel, Sebastian (Faculty of Business, Economics and Social Sciences, University of Hohenheim, Stuttgart, Germany); Uhl, Matthias (Faculty of Business, Economics and Social Sciences, University of Hohenheim, Stuttgart, Germany)
Hundreds of millions of users interact with large language models (LLMs) regularly to get advice on all aspects of life. The increase in LLMs’ logical capabilities might be accompanied by unintended side effects with ethical implications. Focusing on recent model developments of ChatGPT, we show clear evidence of a systematic shift in ethical stances that accompanied a leap in the models’ logical capabilities. Specifically, as ChatGPT’s capacity grows, it tends to give decidedly more utilitarian answers to the two most famous dilemmas in ethics. Given the documented impact that LLMs have on users, we call for a research focus on the prevalence and dominance of ethical theories in LLMs as well as their potential shift over time. Moreover, our findings highlight the need for continuous monitoring and transparent public reporting of LLMs’ moral reasoning to ensure their informed and responsible use.
Publication
Educational ideals affect AI acceptance in learning environments
(2026) Richter, Florian (Catholic University of Eichstätt-Ingolstadt, Eichstätt, Germany); Uhl, Matthias (University of Hohenheim, Stuttgart, Germany)
AI is increasingly used in learning environments to monitor, test, and educate students and allow them to take more individualized learning paths. The success of AI in education will, however, require the acceptance of this technology by university management, faculty, and students. This acceptance will depend on the added value that stakeholders ascribe to this technology. In two empirical studies, we investigate the hitherto neglected question of what impact educational ideals have on the acceptance of AI in learning environments. We find clear evidence for our study participants’ conviction that humanistic educational ideals are less suitable for implementing AI in education than competence-based ideals. This implies that research on the influence of teaching and learning philosophies could be an enlightening component of a comprehensive research program on human-AI interaction in educational contexts.
