
Robustness of large language models in moral judgments

Data files

Feb 27, 2025 version (301.73 MB)


Abstract

Large language models (LLMs) are used for an increasing variety of tasks, some of which may affect decision making. There has therefore been growing interest in understanding how societal norms and moral judgments are reflected in the output of LLMs. Recent work has tested LLMs on various moral judgment tasks and drawn conclusions about the similarities between LLMs and humans. The present contribution critically assesses the validity of the methods and results used in previous work to elicit moral judgments from LLMs. We find that previous results are confounded by biases in how the options of moral judgment tasks are presented, and that LLM responses are highly sensitive to prompt formulation variants as simple as changing "Case 1" and "Case 2" to "(A)" and "(B)". Our results hence indicate that previous conclusions on the moral judgments of LLMs cannot be upheld. We make recommendations for methodologically sounder setups in future studies.
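As a minimal illustration (not the authors' released code), the sketch below shows how the two presentation factors the abstract identifies as confounds, the label scheme ("Case 1"/"Case 2" vs. "(A)"/"(B)") and the order in which the two options appear, might be counterbalanced for a two-option moral judgment item. The prompt wording and example options are hypothetical placeholders, not the dataset's actual stimuli.

from itertools import product

# Two labeling schemes for the same pair of options.
LABEL_SCHEMES = {
    "case": ("Case 1", "Case 2"),
    "letter": ("(A)", "(B)"),
}

def build_prompts(option_x: str, option_y: str) -> list[str]:
    """Return all four variants (2 label schemes x 2 option orders)
    of a single two-option moral judgment item."""
    prompts = []
    orders = [(option_x, option_y), (option_y, option_x)]
    for scheme, order in product(LABEL_SCHEMES, orders):
        first_label, second_label = LABEL_SCHEMES[scheme]
        prompts.append(
            "Which of the following is morally preferable?\n"
            f"{first_label}: {order[0]}\n"
            f"{second_label}: {order[1]}\n"
            "Answer with the label only."
        )
    return prompts

if __name__ == "__main__":
    # Hypothetical example item; real studies would iterate over a full item set.
    for prompt in build_prompts("Divert the trolley.", "Do nothing."):
        print(prompt, end="\n\n")

Querying a model with all four variants of each item and comparing the resulting answer distributions makes label-scheme and option-order effects directly measurable, rather than leaving them as hidden confounds.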