Inducing representational change in the hippocampus through real-time neurofeedback
Data files
Oct 09, 2024 version (21.93 GB total)
- README.md (3.66 KB)
- sub003.zip (916.08 MB)
- sub004.zip (993.55 MB)
- sub005.zip (1.12 GB)
- sub006.zip (1.10 GB)
- sub008.zip (1.16 GB)
- sub009.zip (1.06 GB)
- sub012.zip (1.17 GB)
- sub013.zip (1.32 GB)
- sub014.zip (998.86 MB)
- sub015.zip (1.12 GB)
- sub018.zip (1.14 GB)
- sub021.zip (1.13 GB)
- sub022.zip (1.08 GB)
- sub023.zip (1.15 GB)
- sub024.zip (1.11 GB)
- sub026.zip (1.12 GB)
- sub027.zip (1.02 GB)
- sub029.zip (1.13 GB)
- sub030.zip (931.38 MB)
- sub031.zip (1.17 GB)
Abstract
When you perceive or remember something, other related things come to mind, affecting how these competing items are subsequently perceived and remembered. Such behavioral consequences are believed to result from changes in the overlap of neural representations of these items, especially in the hippocampus. According to multiple theories, hippocampal overlap should increase (integration) when there is high coactivation between cortical representations. However, prior studies used indirect proxies for coactivation, by manipulating stimulus similarity or task demands. Here we induce coactivation in the visual cortex more directly using closed-loop neurofeedback from real-time fMRI. While viewing one object, participants were rewarded for activating the representation of another object as strongly as possible. Across multiple real-time fMRI sessions, they succeeded in using the neurofeedback to induce coactivation. Compared with untrained objects, this coactivation led to memory integration in behavior and the brain: Trained objects became harder for participants to discriminate behaviorally in a categorical perception task and harder to discriminate neurally from patterns of fMRI activity in their hippocampus as a result of losing unique features. These findings demonstrate that neurofeedback can be used to alter and combine memories.
README: Inducing representational change in the hippocampus through real-time neurofeedback
Citation of associated article: Peng K, Wammes JD, Nguyen A, Iordan CR, Norman KA, Turk-Browne NB. (2024) Inducing representational change in the hippocampus through real-time neurofeedback. Phil. Trans. R. Soc. B 379: 20230091. https://doi.org/10.1098/rstb.2023.0091
The dataset can be accessed at https://doi.org/10.5061/dryad.kd51c5bg2
This directory contains de-identified, pre-processed real-time fMRI data and raw behavioral data used for the analyses in the manuscript. To analyze these data, refer to the GitHub repository linked in the Related Works section; that repository uses these files to recreate the figures reported in the paper. For this purpose, the contents of this directory are expected to be in a folder called data/.
The fMRI data were de-identified with FSL BET, which removes the face and extracts the brain.
All files with the suffix '.nii.gz' or '.nii' can be opened using the freely available FMRIB Software Library (FSL, https://fsl.fmrib.ox.ac.uk/fsl/fslwiki) or the open-source software FreeSurfer (https://surfer.nmr.mgh.harvard.edu/).
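For programmatic access, the same NIfTI files can also be read in Python. The snippet below is a minimal sketch and is not part of the analysis repository; it assumes the nibabel package, and the path and run number are illustrative (actual run numbers are listed in each session's runRecording.csv).

```python
# Minimal sketch (not part of the analysis repository): reading a NIfTI file
# with the nibabel package. The path is illustrative; actual run numbers are
# listed in each session's runRecording.csv.
import nibabel as nib

img = nib.load("data/subjects/sub003/ses1/recognition/run_1_bet.nii")
data = img.get_fdata()                     # 4D array: x, y, z, volumes
print(data.shape, img.header.get_zooms())  # matrix size and voxel size / TR
```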
Description of the data and file structure
Data are organized separately for each participant (sub0??). Data from the 5 sessions are saved separately in the folders ses[1-5].
data/subjects/sub0??/ses?/recognition/run_?_bet.nii: fMRI data from the recognition runs after motion correction, field map unwarping, defacing, and skull stripping.
data/subjects/sub0??/ses?/feedback/run_?_bet.nii: fMRI data from the feedback runs after motion correction, field map unwarping, defacing, and skull stripping.
data/subjects/sub0??/ses?/recognition/sub0??_?.csv: raw behavioral and timing data from the corresponding recognition run.
data/subjects/sub0??/ses?/recognition/behav_run?.csv: behavioral and timing data from the corresponding recognition run after preprocessing by the function recognition_preprocess.
data/subjects/sub0??/ses?/recognition/mask/chosenMask.npy: the mega-ROI used for real-time neurofeedback during the feedback runs (a loading sketch follows this file list).
data/subjects/sub0??/ses?/recognition/mask/[V1/V2/LOC/IT/Fus/PHC]_FreeSurfer.nii: the cortical ROIs used for the ROI analyses.
data/subjects/sub0??/ses?/recognition/mask/lfseg_corr_usegray_[1-7/hippocampus].nii: the hippocampal subfield ROIs used for the ROI analyses.
Hippocampal subfield IDs:
hippoSubfieldID = {
1: "CA1",
2: "CA2+3",
3: "DG",
4: "ERC",
5: "PHC",
6: "PRC",
7: "SUB"
}
data/subjects/sub0??/ses[1/5]/catPer/catPer_000000sub0??_[1/2].txt: raw data for the categorical perception task in sessions 1 and 5.
data/subjects/sub0??/ses?/fmap: field map files used for field map unwarping.
data/subjects/sub0??/ses?/runRecording.csv: record of the run type for each scan.
data/subjects/sub0??/adaptiveThreshold.csv: record of the adaptive threshold for each subject.
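The sketch below illustrates how the files above fit together: it loads one recognition run, applies the mega-ROI (chosenMask.npy), and extracts a hippocampal subfield time series. It is not part of the analysis repository; nibabel and numpy are assumed, the subject, session, and run number are illustrative, and the masks are assumed to share the functional data's voxel grid (with each lfseg_corr_usegray_?.nii treated as a binary mask for the subfield given by hippoSubfieldID).

```python
# Minimal sketch (assumptions noted in comments), not part of the analysis repository.
import numpy as np
import nibabel as nib

sub, ses = "sub003", "ses1"
base = f"data/subjects/{sub}/{ses}/recognition"

# Recognition run (run number is illustrative; see runRecording.csv).
bold = nib.load(f"{base}/run_1_bet.nii").get_fdata()        # 4D array: x, y, z, volumes
mega = np.load(f"{base}/mask/chosenMask.npy").astype(bool)  # mega-ROI used for neurofeedback

# Assumes the mask and the functional data are on the same voxel grid.
roi_timeseries = bold[mega]  # (n_voxels_in_mask, n_volumes)

# Hippocampal subfield masks: per hippoSubfieldID above, "_1" corresponds to CA1
# (assuming each lfseg_corr_usegray_?.nii is a binary mask for that subfield).
ca1 = nib.load(f"{base}/mask/lfseg_corr_usegray_1.nii").get_fdata() > 0
ca1_timeseries = bold[ca1]

print(roi_timeseries.shape, ca1_timeseries.shape)
```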
Code/Software
The scripts in the real_time_neurofeedback repository can be used to run the analyses reported in the paper.
The main.py script indicates which code generates each figure. Scripts in that directory can rerun the analyses; refer to the README in the real_time_neurofeedback repository for further directions.
Questions about the data or analyses can be directed to Kailong Peng (kailong.peng@yale.edu or kailongpeng001@gmail.com).
Methods
Data were acquired using a 3T Siemens Prisma scanner with a 64-channel head coil at the Brain Imaging Center at Yale University.
The scan sequences are as follows:
For recognition and feedback functional runs, an echo-planar imaging (EPI) sequence was used to collect BOLD data (TR=2 s; TE=30 ms; voxel size=3 mm isotropic; FA=90°; IPAT GRAPPA acceleration factor=2; distance factor=25%), yielding 36 axial slices. Each recognition run contained 145 volumes and each feedback run contained 176 volumes. Two field map volumes (TR=5 s; TE=80 ms; otherwise matching the EPI scans) were acquired in opposite phase encoding directions. For anatomical alignment and visualization, we collected a 3D T1-weighted magnetization-prepared rapid acquisition gradient echo (MPRAGE) scan (TR=2.5 s; TE=2.9 ms; voxel size=1 mm isotropic; FA=8°; 176 sagittal slices; IPAT GRAPPA acceleration factor=2), and a 3D T2-weighted fast spin echo scan with variable flip angle (TR=3.2 s; TE=565 ms; voxel size=1 mm isotropic; 176 sagittal slices; IPAT GRAPPA acceleration factor=2).
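For orientation, the functional run durations follow directly from the volume counts and the TR reported above; a quick check:

```python
# Run durations implied by the parameters above (TR = 2 s per volume).
TR = 2.0  # seconds
print(145 * TR)  # recognition run: 290 s (~4.8 min)
print(176 * TR)  # feedback run: 352 s (~5.9 min)
```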