Cross-modal representation of identity in primate hippocampus
Data files
Sep 23, 2022 version, 4.11 MB:
- README.txt
- TyreeEtAl_data.zip
Abstract
Faces and voices are the dominant social signals used to recognize individuals amongst human and nonhuman primates. Yet, evidence that information across these signals can be integrated into a modality-independent representation of individual identity in the primate brain has been reported only in human patients. Here we show that, like humans, single neurons in the marmoset monkey hippocampus exhibit invariant neural responses when presented with the faces or voices of specific individuals. However, we also identified a population of single neurons in hippocampus that was responsive to the cross-modal identity of multiple conspecifics, not only a single individual. An identity network model revealed population-level, cross-modal representations of individuals in hippocampus, underscoring the broader contributions of many neurons to encode identity. This pattern was further evidenced by manifold projections of population activity which likewise showed separability of individuals, as well as clustering for family members, suggesting that multiple learned social categories are encoded as related dimensions of identity in hippocampus. The constellation of findings presented here reveals a novel perspective on the neural basis of identity representations in primate hippocampus as being both invariant to modality and comprising multiple levels of acquired social knowledge.
Methods
Neurophysiological recordings were performed using 64-channel microwire brush arrays while subjects were head- and body-restrained in a chair. Visual stimuli were presented on an LED screen positioned 24 cm in front of the animal. Acoustic stimuli were presented at 70–80 dB SPL. All behavior was collected in an anechoic chamber illuminated only by the screen. Stimulus presentation was controlled using custom software, and eye position was monitored by infrared camera tracking of the pupil. Subjects initiated trials by holding fixation of gaze on a center fixation dot on the screen for 100 ms, at which point stimulus presentation began. Each stimulus consisted of the face and/or voice of a familiar conspecific.
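The trial logic described above can be summarized as a simple state loop. The following is a minimal Python sketch of that logic only; the function names (gaze_on_fixation_dot, present_stimulus) are illustrative placeholders, not part of the custom presentation software used in the study.

import time

FIXATION_HOLD_S = 0.100  # subjects held fixation for 100 ms to start a trial

def gaze_on_fixation_dot() -> bool:
    # Placeholder for the infrared pupil-tracking check; always True here.
    return True

def present_stimulus(trial) -> None:
    # Placeholder for presenting a familiar conspecific's face and/or voice.
    print(f"presenting stimulus: {trial}")

def run_trial(trial) -> None:
    # Present the stimulus once gaze has been held on the fixation dot
    # continuously for FIXATION_HOLD_S seconds.
    hold_start = None
    while True:
        if gaze_on_fixation_dot():
            hold_start = hold_start or time.monotonic()
            if time.monotonic() - hold_start >= FIXATION_HOLD_S:
                present_stimulus(trial)
                return
        else:
            hold_start = None  # gaze broke; restart the hold timer

run_trial({"identity": "conspecific_A", "modality": "face+voice"})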
Neurophysiological recordings were processed into spike times using the open-source package Spyking Circus [1] and were then included in the dataset using custom MATLAB and Python scripts. Handling of common recording errors is demonstrated at the beginning of the example notebook.
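As an illustration only (the file name and per-line layout below are assumptions; consult README.txt and the example notebook for the actual data organization), spike times stored as plain text can be loaded and sanity-checked with NumPy:

import numpy as np

# Hypothetical file name; see README.txt inside TyreeEtAl_data.zip for the
# actual layout. Spike times are assumed here to be one timestamp (in
# seconds) per line for a single sorted unit.
spike_times = np.loadtxt("unit_001_spike_times.txt")

# Basic sanity checks of the kind one might apply before analysis (the
# example notebook demonstrates the actual handling of common recording
# errors): drop non-finite entries and enforce increasing timestamps.
spike_times = spike_times[np.isfinite(spike_times)]
spike_times = np.sort(spike_times)

duration = spike_times[-1] - spike_times[0]
print(f"{spike_times.size} spikes, mean rate ~{spike_times.size / duration:.2f} Hz")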
Four adult marmosets (2 male, 2 female) served as subjects in these experiments. All animals were socially housed with 2–8 conspecifics in the Cortical Systems and Behavior Laboratory at the University of California San Diego (UCSD). All procedures were approved by the Institutional Animal Care and Use Committee at UCSD and followed National Institutes of Health guidelines.
[1] Yger P., Spampinato G.L.B., Esposito E., Lefebvre B., Deny S., Gardella C., Stimberg M., Jetter F., Zeck G., Picaud S., Duebel J., Marre O. A spike sorting toolbox for up to thousands of electrodes validated with ground truth recordings in vitro and in vivo. eLife 2018;7:e34518.
Usage notes
No specialized software is required to open the data. The files are in a human-readable format that can be opened in most text editors.
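For example, the archive contents can be listed and previewed with Python's standard library. The sketch below assumes only the archive name shown under Data files; which files it contains is documented in README.txt.

import zipfile

# List the contents of the data archive and preview one member as text.
with zipfile.ZipFile("TyreeEtAl_data.zip") as archive:
    for name in archive.namelist():
        print(name)
    # Read whichever member appears first as plain text (UTF-8 assumed).
    first = archive.namelist()[0]
    text = archive.read(first).decode("utf-8")
    print(text[:500])  # preview the first 500 characters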