Does a smarter ChatGPT become more utilitarian?
| DC Field | Value | Language |
| --- | --- | --- |
| dc.contributor.author | Pfeffer, Jürgen | |
| dc.contributor.author | Krügel, Sebastian | |
| dc.contributor.author | Uhl, Matthias | |
| dc.contributor.corporate | Pfeffer, Jürgen; Technical University of Munich, TUM School of Social Sciences and Technology, Munich, Germany | |
| dc.contributor.corporate | Krügel, Sebastian; Faculty of Business, Economics and Social Sciences, University of Hohenheim, Stuttgart, Germany | |
| dc.contributor.corporate | Uhl, Matthias; Faculty of Business, Economics and Social Sciences, University of Hohenheim, Stuttgart, Germany | |
| dc.date.accessioned | 2026-01-28T10:54:39Z | |
| dc.date.available | 2026-01-28T10:54:39Z | |
| dc.date.issued | 2026 | |
| dc.date.updated | 2026-01-23T13:56:38Z | |
| dc.description.abstract | Hundreds of millions of users regularly interact with large language models (LLMs) to get advice on all aspects of life. The increase in LLMs’ logical capabilities may be accompanied by unintended side effects with ethical implications. Focusing on recent model developments of ChatGPT, we show clear evidence of a systematic shift in ethical stances that accompanied a leap in the models’ logical capabilities. Specifically, as ChatGPT’s capacity grows, it tends to give decidedly more utilitarian answers to the two most famous dilemmas in ethics. Given the documented impact that LLMs have on users, we call for a research focus on the prevalence and dominance of ethical theories in LLMs, as well as on their potential shift over time. Moreover, our findings highlight the need for continuous monitoring and transparent public reporting of LLMs’ moral reasoning to ensure their informed and responsible use. | en |
| dc.description.sponsorship | Open Access funding enabled and organized by Projekt DEAL. | |
| dc.description.sponsorship | Universität Hohenheim (3153) | |
| dc.identifier.uri | https://doi.org/10.1007/s11948-025-00579-4 | |
| dc.identifier.uri | https://hohpublica.uni-hohenheim.de/handle/123456789/18802 | |
| dc.language.iso | eng | |
| dc.rights.license | cc_by | |
| dc.subject | Large language models | |
| dc.subject | Utilitarianism | |
| dc.subject | Ethical theories | |
| dc.subject | ChatGPT | |
| dc.subject | Trolley dilemma | |
| dc.subject.ddc | 170 | |
| dc.title | Does a smarter ChatGPT become more utilitarian? | en |
| dc.type.dini | Article | |
| dcterms.bibliographicCitation | Science and engineering ethics, 32 (2026), 1, 1. https://doi.org/10.1007/s11948-025-00579-4. ISSN: 1471-5546. Dordrecht: Springer Netherlands | |
| dcterms.bibliographicCitation.articlenumber | 1 | |
| dcterms.bibliographicCitation.issn | 1471-5546 | |
| dcterms.bibliographicCitation.issue | 1 | |
| dcterms.bibliographicCitation.journaltitle | Science and engineering ethics | |
| dcterms.bibliographicCitation.originalpublishername | Springer Netherlands | |
| dcterms.bibliographicCitation.originalpublisherplace | Dordrecht | |
| dcterms.bibliographicCitation.volume | 32 | |
| local.export.bibtex | @article{Pfeffer2026, doi = {10.1007/s11948-025-00579-4}, author = {Pfeffer, Jürgen and Krügel, Sebastian and Uhl, Matthias}, title = {Does a Smarter ChatGPT Become More Utilitarian?}, journal = {Science and Engineering Ethics}, year = {2026}, volume = {32}, number = {1}, } | |
| local.subject.sdg | 9 | |
| local.subject.sdg | 16 | |
| local.title.full | Does a Smarter ChatGPT Become More Utilitarian? | |
