
Text understanding in GPT-4 vs humans

Data files: Apr 28, 2025 version, 488.32 KB

Abstract

We examine whether a leading AI system, GPT-4, understands text as well as humans do, first using a well-established standardized test of discourse comprehension. On this test, GPT-4 performs slightly better than humans, although the difference is not statistically significant, given the very high level of human performance. Both GPT-4 and humans make correct inferences about information that is not explicitly stated in the text, a critical test of understanding. Next, we use more difficult passages to determine whether more demanding material could reveal larger differences between GPT-4 and humans. GPT-4 does considerably better on this more difficult text than do the high school and university students for whom these passages were designed as admission tests of reading comprehension. Deeper exploration of GPT-4's performance on material from one of these admission tests reveals generally accepted signatures of genuine understanding, namely generalization and inference.