Browsing by Subject "Large language models"

Now showing 1 - 1 of 1
    Publication
    Does a smarter ChatGPT become more utilitarian?
    (2026) Pfeffer, Jürgen (Technical University of Munich, TUM School of Social Sciences and Technology, Munich, Germany); Krügel, Sebastian (Faculty of Business, Economics and Social Sciences, University of Hohenheim, Stuttgart, Germany); Uhl, Matthias (Faculty of Business, Economics and Social Sciences, University of Hohenheim, Stuttgart, Germany)
    Hundreds of millions of users interact with large language models (LLMs) regularly to get advice on all aspects of life. The increase in LLMs’ logical capabilities might be accompanied by unintended side effects with ethical implications. Focusing on recent model developments of ChatGPT, we can show clear evidence for a systematic shift in ethical stances that accompanied a leap in the models’ logical capabilities. Specifically, as ChatGPT’s capacity grows, it tends to give decisively more utilitarian answers to the two most famous dilemmas in ethics. Given the documented impact that LLMs have on users, we call for a research focus on the prevalence and dominance of ethical theories in LLMs as well as their potential shift over time. Moreover, our findings highlight the need for continuous monitoring and transparent public reporting of LLMs’ moral reasoning to ensure their informed and responsible use.
