
Inductive biases of neural network modularity in spatial navigation

Data files

Jun 19, 2024 version · 5.25 GB

Abstract

The brain may have evolved a modular architecture for reward-based learning in daily tasks, with circuits featuring functionally specialized modules that match the task structure. We propose that this architecture enables better learning and generalization than architectures with less specialized modules. To test this hypothesis, we trained reinforcement learning agents with various neural architectures on a naturalistic navigation task. We found that the architecture that largely segregates computations of state representation, value, and action into specialized modules enables more efficient learning and better generalization. The behavior of agents with this modular architecture also resembles macaque behaviors more closely. Investigating the latent state computations in these agents, we discovered that the learned state representation combines prediction and observation, weighted by their relative uncertainty, akin to a Kalman filter. These results shed light on the possible rationale for the brain's modular specializations and suggest that artificial systems can use this insight from neuroscience to improve learning and generalization in natural tasks.
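To make the modular architecture concrete, below is a minimal PyTorch sketch of an agent that segregates state representation, value, and action into specialized modules, in the spirit the abstract describes. This is an illustrative reconstruction, not code from this dataset; the class name `ModularAgent`, the choice of a GRU for the state module, and all layer sizes are our assumptions.

```python
import torch
import torch.nn as nn

class ModularAgent(nn.Module):
    """Hypothetical sketch of a modular agent: state representation,
    value, and action are computed in separate, specialized modules."""
    def __init__(self, obs_dim, action_dim, hidden_dim=128):
        super().__init__()
        # Module 1: recurrent state representation (latent/belief state)
        self.state_module = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        # Module 2: value estimation (critic)
        self.value_module = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1))
        # Module 3: action selection (actor)
        self.action_module = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, action_dim))

    def forward(self, obs_seq, h0=None):
        # obs_seq: (batch, time, obs_dim) -> latent states: (batch, time, hidden_dim)
        states, hN = self.state_module(obs_seq, h0)
        return self.action_module(states), self.value_module(states), hN
```

The key structural point is that the value and action heads read only from the shared latent state, never from raw observations, so each module can specialize in one computation.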
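Likewise, the uncertainty-weighted fusion of prediction and observation that the abstract likens to a Kalman filter can be written in a few lines. The sketch below is the textbook one-dimensional Kalman update, shown only to illustrate the weighting rule; it is not the agents' learned computation, and the function name and variables are hypothetical.

```python
def kalman_update(x_pred, var_pred, obs, var_obs):
    """Fuse a prediction with an observation, weighting each by its
    relative uncertainty, as in a 1-D Kalman filter update."""
    # Kalman gain: approaches 1 when the prediction is unreliable,
    # 0 when the observation is unreliable
    gain = var_pred / (var_pred + var_obs)
    x_post = x_pred + gain * (obs - x_pred)   # posterior estimate
    var_post = (1.0 - gain) * var_pred        # posterior variance shrinks
    return x_post, var_post

# Example: an uncertain prediction is pulled strongly toward a precise observation.
x, v = kalman_update(x_pred=0.0, var_pred=4.0, obs=1.0, var_obs=1.0)
print(x, v)  # 0.8 0.8
```

The gain term is what implements "weighted by their relative uncertainty": the less reliable source always contributes less to the fused state estimate.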