
Reconfigurations of cortical manifold structure during reward-based motor learning

Cite this dataset

Gallivan, Jason et al. (2024). Reconfigurations of cortical manifold structure during reward-based motor learning [Dataset]. Dryad. https://doi.org/10.5061/dryad.7sqv9s512

Abstract

Adaptive motor behavior depends on the coordinated activity of multiple neural systems distributed across the brain. While the role of sensorimotor cortex in motor learning has been well-established, how higher-order brain systems interact with sensorimotor cortex to guide learning is less well understood. Using functional MRI, we examined human brain activity during a reward-based motor task where subjects learned to shape their hand trajectories through reinforcement feedback. We projected patterns of cortical and striatal functional connectivity onto a low-dimensional manifold space and examined how regions expanded and contracted along the manifold during learning. During early learning, we found that several sensorimotor areas in the Dorsal Attention Network exhibited increased covariance with areas of the salience/ventral attention network and reduced covariance with areas of the default mode network (DMN). During late learning, these effects reversed, with sensorimotor areas now exhibiting increased covariance with DMN areas. However, areas in posteromedial cortex showed the opposite pattern across learning phases, with its connectivity suggesting a role in coordinating activity across different networks over time. Our results establish the neural changes that support reward-based motor learning and identify distinct transitions in the functional coupling of sensorimotor to transmodal cortex when adapting behavior.

README: Reconfigurations of cortical manifold structure during reward-based motor learning

https://doi.org/10.5061/dryad.7sqv9s512

Welcome to the data repository for the paper by Nick et al. (2024), published in eLife. This repository contains the behavioral and preprocessed fMRI timeseries data used in the paper.

Description of the data and file structure

There are two main sets of data files included in this repository:

(1) fMRI Timeseries data.

This includes a .csv file for each task epoch (Baseline, Early and Late learning) and each subject (N=36). For example, the file "ts_1_baseline.csv" includes the timeseries data for subject #1 for 1000 Schaefer cortical brain regions and 14 subcortical regions from the Harvard-Oxford parcellation (left and right Thalamus, Putamen, Pallidum, Hippocampus, Amygdala, Accumbens). Each timeseries is 216 imaging volumes long (see the Methods section below) and the column headers denote the names of individual brain regions.
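As a quick orientation, the sketch below shows how one of these timeseries files might be loaded in Python and turned into a region-by-region connectivity matrix. The file name follows the example above; the correlation step is purely illustrative and is not the covariance/manifold analysis reported in the paper.

```python
# Minimal sketch, assuming pandas/numpy are installed and that the CSV has one
# column per brain region (check the file for any extra index column).
import numpy as np
import pandas as pd

# Timeseries for subject 1, Baseline epoch: 216 volumes x n regions.
ts = pd.read_csv("ts_1_baseline.csv")
print(ts.shape)               # expected: (216, n_regions)
print(list(ts.columns[:5]))   # first few region names

# Illustrative region-by-region functional connectivity (Pearson correlation).
fc = np.corrcoef(ts.to_numpy().T)
print(fc.shape)               # (n_regions, n_regions)
```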

(2) Behavioral Learning data.

This includes two files (A and B below):

A) Behavioraldata.csv contains trial-level binned learning data (average of 10 trials) in long format, with the following information:

      SubjectNumber = Subject number (corresponding to the fMRI timeseries data)

      SubjectName = Sub1, Sub2, etc.

      Block = denotes the task epoch block (Baseline or Learning trials)

      BinNumber = bins of 10 trials (e.g., BinNumber 1 denotes the first 10 trials, etc.)

      Score = denotes the average learning score (see the Methods section below) for each bin

      RT = denotes the average reaction time (see the Methods section below) for each bin

      MT = denotes the average movement time (see the Methods section below) for each bin

      TrajectoryXPosition_# column headers: denote the mean x position (see the Methods section below) of the average movement trajectory for each bin.

B) fpcaScores.csv contains each subject's single fPCA score (used in the brain-behavior correlation analyses). The fPCA score describes their overall pattern of learning (see the Methods section below).
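For convenience, a minimal Python sketch for loading these two behavioral files is shown below. The column names are taken from the descriptions above; the exact labels in the Block column are an assumption and should be checked against the file.

```python
# Minimal sketch, assuming pandas is installed and the column names listed above.
import pandas as pd

behav = pd.read_csv("Behavioraldata.csv")
fpca = pd.read_csv("fpcaScores.csv")

# Reshape the long-format scores into a subjects x bins matrix for the
# Learning block ("Learning" label is an assumption; inspect behav["Block"]).
learning = behav[behav["Block"] == "Learning"]
curves = learning.pivot(index="SubjectNumber", columns="BinNumber", values="Score")

print(curves.shape)    # (n_subjects, n_bins)
print(fpca.head())     # one fPCA score per subject
```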

Sharing/Access information

Other publicly available versions of the data:

Code/Software

Code for data analysis can be downloaded from the first author's (Qasem Nick) GitHub profile:
[https://github.com/qniksefat/cortical-manifolds-in-reward-based-motor-learning](https://github.com/qniksefat/cortical-manifolds-in-reward-based-motor-learning)

All details for running the code and performing the analyses can be found in the Python notebook elife_submission.py.

Methods

Description of the reward-based motor learning task

In this task, subjects (N=36) used their right finger on an MRI-compatible touchpad to trace, without visual feedback of their finger, a rightward-curved path displayed on a screen (see Fig. 1A,B in the paper). Participants began the MRI study by performing a *Baseline* block of 70 trials, wherein they did not receive any feedback about their performance. Following this, subjects began a separate *Learning* block of 200 trials in which they were told that they would now receive score feedback (from 0 to 100 points), presented at the end of each trial, based on how accurately they traced the visual path displayed on the screen. However, unbeknownst to subjects, the score they actually received was based on how well they traced a *hidden* mirror-image path (the ‘reward’ path, which was reflected across the vertical axis; see Fig. 1C in the paper). Importantly, because subjects received no visual feedback about their actual finger trajectory and could not see their own hand, they could only use the score feedback — and thus only reward-based learning mechanisms — to modify their movements from one trial to the next (Dam et al., 2013; Wu et al., 2014). That is, subjects could not use error-based learning mechanisms to achieve learning in our study, as this form of learning requires sensory errors that convey both the change in direction and magnitude needed to correct the movement.

Each trial started with the participant moving a cursor (3 mm radius cyan circle), which represented their finger position, into the start position (4 mm radius white circle) at the bottom of the screen (by sliding the index finger on the tablet). The cursor was only visible when it was within 30 mm of the start position. After the cursor was held within the start position for 0.5 s, the cursor disappeared and a rightward-curved path (Visible Path) and a movement distance marker appeared on the screen (see Fig. 1B in the paper). The movement distance marker was a horizontal red line (30 x 1 mm) that appeared 60 mm above the start position. The visible path connected the start position and movement distance marker, and had the shape of a half sine wave with an amplitude of 0.15 times the marker distance. Participants were instructed to trace the curved path. When the cursor reached the target distance, the target changed color from red to green to indicate that the trial was completed. Importantly, other than this color change in the distance marker, the visible curved path remained constant and participants never received any feedback about the position of their cursor.

In the baseline block, participants did not receive any feedback about their performance. In the learning block, participants were rewarded 0 to 100 points after reaching the movement distance marker, and were instructed to do their best to maximize this score across trials (following the movement, the points were displayed as text centrally on the screen). Each trial was terminated after 4.5 s, independent of whether the cursor had reached the target. After a 1.5 s delay (during which the screen was blanked), allowing time for the data to be saved and for the subject to return to the starting location, the next trial started with the presentation of the start position.

To calculate the reward score on each trial in the learning block, the x position of the cursor was interpolated at each cm displacement from the start position in the y direction (i.e., at exactly 10, 20, 30, 40, 50 and 60 mm). For each of the six y positions, the absolute distance between the interpolated x position of the cursor and the x position of the rewarded path was calculated. The sum of these errors was scaled by dividing it by the sum of errors obtained for a half-cycle sine-shaped path with an amplitude of 0.5 times the target distance; this scaled error was then subtracted from 1 and multiplied by 100 to obtain a score ranging between 0 and 100 (larger scores indicating closer tracing of the hidden reward path). The scaling worked out such that a perfectly traced visible path would result in an imperfect score of 40 points.
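To make the scoring rule concrete, here is a small Python sketch of this computation. It is an illustration rather than the authors' task code, and it assumes millimetre units, a 60 mm target distance, a hidden reward path that mirrors the 0.15-amplitude visible path, and that the normalizing errors of the 0.5-amplitude reference path are measured from a straight line between the start position and the distance marker. Under these assumptions, a perfectly traced visible path scores 40 points, as noted above.

```python
# Minimal sketch of the score computation described above (illustrative only).
import numpy as np

D = 60.0                              # target distance (mm); assumption
y_eval = np.arange(10.0, 70.0, 10.0)  # evaluated y positions: 10, 20, ..., 60 mm

def reward_score(traj_y, traj_x):
    """Cursor samples (mm) from start to the distance marker; traj_y must increase."""
    # Interpolate the cursor's x position at each evaluated y position.
    x_cursor = np.interp(y_eval, traj_y, traj_x)
    # Hidden rewarded path: mirror image of the visible half-sine path.
    x_reward = -0.15 * D * np.sin(np.pi * y_eval / D)
    # Normalizing errors: half-sine reference path with amplitude 0.5 * D,
    # measured from a straight line between start and marker (assumption).
    ref_err = np.sum(np.abs(0.5 * D * np.sin(np.pi * y_eval / D)))
    scaled = np.sum(np.abs(x_cursor - x_reward)) / ref_err
    return float(np.clip(100.0 * (1.0 - scaled), 0.0, 100.0))

# A perfectly traced *visible* path (rightward half-sine) scores 40 points.
y = np.linspace(0.0, D, 100)
print(round(reward_score(y, 0.15 * D * np.sin(np.pi * y / D))))   # -> 40
```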

Behavioral Data Analyses
 
Data Preprocessing:

Each movement trajectory was first re-sampled to 10 equally spaced points along the y (vertical) axis, between the starting position and the target distance marker. We defined subjects’ reaction time (RT) as the time between trial onset and the cursor reaching 10% of the distance from the starting location, and defined subjects’ movement time (MT) as the remaining time until reaching the target distance marker. Trials in which the cursor did not reach the target within the time limit were excluded from the offline analysis of hand movements (~1% of trials). As insufficient pressure on the touchpad resulted in a default state in which the cursor was reported as lying in the top left corner of the screen, we excluded trials in which the cursor jumped to this position before reaching the target region (~2% of trials). We then applied a conservative threshold on the movement and reaction times, removing the top 0.05% of trials across all subjects. As the motor task did not involve response discrimination, we did not set a lower threshold on these variables.
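As a rough illustration of this per-trial preprocessing (not the authors' code; the raw touchpad sample format and the 60 mm target distance are assumptions carried over from the task description), the resampling and RT/MT definitions could be implemented as follows:

```python
# Minimal sketch, assuming each trial provides sample times t (s) and cursor
# positions x, y (mm) as numpy arrays, with y increasing monotonically and the
# cursor reaching the target (trials that did not are excluded, as above).
import numpy as np

def preprocess_trial(t, x, y, target_dist=60.0):
    # Resample the trajectory at 10 equally spaced y positions between the
    # start (y = 0) and the target distance marker.
    y_grid = np.linspace(0.0, target_dist, 10)
    x_resampled = np.interp(y_grid, y, x)

    # Reaction time: trial onset until the cursor covers 10% of the distance
    # to the target; movement time: remaining time until the target.
    i_start = np.argmax(y >= 0.1 * target_dist)
    i_end = np.argmax(y >= target_dist)
    rt = t[i_start] - t[0]
    mt = t[i_end] - t[i_start]
    return x_resampled, rt, mt
```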

Functional PCA of subject behavioral data:

For the fPCA analysis, all subject behavioral data were averaged over 8 trial bins. We represented individual learning curves as functional data using a cubic spline basis with a smoothing penalty estimated by generalized cross-validation (Härdle, 1990). We then performed *functional PCA* (Ramsay and Silverman, 2013), which allowed us to extract components capturing the dominant patterns of variability in subject performance. Using this analysis, we found that the top component, which describes overall learning, explained a majority of the variability (~75%) in performance. Spline smoothing and fPCA were performed using the R package fda (Ramsay et al., 2022).
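The analysis itself was run in R with the fda package. As a rough, simplified Python stand-in (not the authors' implementation), one could spline-smooth each subject's binned learning curve and take the leading principal component of the smoothed curves; here the smoothing parameter is fixed rather than chosen by generalized cross-validation.

```python
# Simplified approximation of functional PCA via PCA on spline-smoothed curves.
import numpy as np
from scipy.interpolate import UnivariateSpline

def fpca_first_component(curves, smoothing=5.0, n_grid=100):
    """curves: (n_subjects, n_bins) array of binned learning scores."""
    n_subjects, n_bins = curves.shape
    bins = np.arange(n_bins)
    grid = np.linspace(0, n_bins - 1, n_grid)

    # Cubic-spline smooth each subject's curve and evaluate on a dense grid.
    smooth = np.vstack([
        UnivariateSpline(bins, c, k=3, s=smoothing)(grid) for c in curves
    ])

    # PCA on the mean-centred smoothed curves via SVD.
    centred = smooth - smooth.mean(axis=0)
    u, s, vt = np.linalg.svd(centred, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    scores = centred @ vt[0]          # one score per subject (cf. fpcaScores.csv)
    return scores, explained[0]
```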

fMRI Data Analyses
 
MRI Acquisition:

Participants were scanned using a 3-Tesla Siemens TIM MAGNETOM Trio MRI scanner located at the Centre for Neuroscience Studies, Queen’s University (Kingston, Ontario, Canada). Subject anatomicals were acquired using a 32-channel head coil and a T1-weighted ADNI MPRAGE sequence (TR = 1760 ms, TE = 2.98 ms, field of view = 192 mm x 240 mm x 256 mm, matrix size = 192 x 240 x 256, flip angle = 9°, 1 mm isotropic voxels). We acquired functional MRI volumes using a T2-weighted single-shot gradient-echo echo-planar imaging (EPI) acquisition sequence (time to repetition (TR) = 2000 ms, slice thickness = 4 mm, in-plane resolution = 3 mm x 3 mm, time to echo (TE) = 30 ms, field of view = 240 mm x 240 mm, matrix size = 80 x 80, flip angle = 90°, and acceleration factor (integrated parallel acquisition technologies, iPAT) = 2 with generalized auto-calibrating partially parallel acquisitions (GRAPPA) reconstruction). Each volume comprised 34 contiguous (no gap) oblique slices acquired at a ~30° caudal tilt with respect to the plane of the anterior and posterior commissure (AC-PC), providing whole-brain coverage of the cerebrum and cerebellum. Note that for the current study, we did not examine changes in cerebellar activity during learning. For the baseline and learning scans, we acquired 222 and 612 imaging volumes, respectively. Each of these task-related scans included an additional 6 imaging volumes at both the beginning and end of the scan.

Preprocessing of fMRI data:

Preprocessing of anatomical and functional MRI data was performed using fMRIPrep 20.1.1 (Esteban et al., 2019, n.d.; RRID:SCR_016216), which is based on Nipype 1.5.0 (Gorgolewski et al., 2011, 2018; RRID:SCR_002502). Many internal operations of fMRIPrep use Nilearn 0.6.2 (Abraham et al., 2014; RRID:SCR_001362), mostly within the functional processing workflow. For more details of the pipeline, see the section corresponding to workflows in fMRIPrep’s documentation, as well as the research paper.

Regional time series extraction:

For each participant and scan, the average BOLD time series were computed from the grayordinate time series for (1) each of the 998 regions defined according to the Schaefer 1000 parcellation (Schaefer et al., 2018; two regions were removed from the parcellation due to their small parcel size) and (2) each of the 12 striatal regions defined according to the Harvard-Oxford atlas (Frazier et al., 2005; Makris et al., 2006), which included the caudate, putamen, accumbens, pallidum, hippocampus and amygdala. Region timeseries were denoised using confound regressors (see the research paper for details) in conjunction with the discrete cosine regressors (128 s cut-off for high-pass filtering) produced by fMRIPrep, and low-pass filtered using a Butterworth filter (100 s cut-off) implemented in Nilearn. Finally, all region timeseries were z-scored.
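Note that the deposited ts_*.csv files are already denoised and z-scored. For reference, a minimal Python sketch of this kind of denoising step on a hypothetical raw parcel-timeseries file is shown below; the file names and confound columns are assumptions, and the high-pass filtering is handled by including the discrete cosine regressors among the confounds.

```python
# Minimal sketch, assuming nilearn/pandas are installed. The input files are
# hypothetical: "raw_region_ts.csv" stands for un-denoised parcel timeseries
# and "confounds.tsv" for the fMRIPrep confounds (including the discrete
# cosine regressors used for high-pass filtering).
import pandas as pd
from nilearn import signal

region_ts = pd.read_csv("raw_region_ts.csv")          # volumes x regions
confounds = pd.read_csv("confounds.tsv", sep="\t")    # volumes x regressors

cleaned = signal.clean(
    region_ts.to_numpy(),
    confounds=confounds.to_numpy(),
    t_r=2.0,               # TR from the acquisition section above
    low_pass=1.0 / 100.0,  # Butterworth low-pass, 100 s cut-off
    detrend=False,
    standardize="zscore",  # z-score each region timeseries
)
```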

For every participant, region timeseries from the task scans were spliced into three equal-length task epochs (216 imaging volumes each), after having discarded the first 6 imaging volumes (thus avoiding scanner equilibrium effects). This allowed us to estimate functional connectivity from continuous brain activity over the corresponding 70 trials for each epoch; the Baseline epoch comprised the initial 70 trials in which subjects performed the motor task in the absence of any reward feedback, whereas the Early and Late learning epochs consisted of the first and last 70 trials after the onset of reward feedback, respectively.
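A sketch of this splicing step is given below. The exact start indices are assumptions based on the volume counts above (222 baseline and 612 learning volumes, 6 dummy volumes at the start and end of each scan, 216-volume epochs); in particular, whether the Late epoch excludes the trailing dummy volumes is an assumption.

```python
# Minimal sketch of the epoch splicing (indices are assumptions; adjust if the
# Late epoch is instead taken as the final 216 volumes of the learning scan).
import numpy as np

N_EPOCH = 216   # imaging volumes per task epoch
N_DUMMY = 6     # extra volumes at the start (and end) of each scan

def epoch(ts, start, n_vols=N_EPOCH):
    """Return one task epoch from a (volumes x regions) timeseries array."""
    return ts[start:start + n_vols]

# Placeholder arrays standing in for one subject's scans (region count arbitrary).
baseline_ts = np.zeros((222, 5))
learning_ts = np.zeros((612, 5))

baseline = epoch(baseline_ts, N_DUMMY)                               # Baseline
early = epoch(learning_ts, N_DUMMY)                                  # first 70 learning trials
late = epoch(learning_ts, learning_ts.shape[0] - N_DUMMY - N_EPOCH)  # last 70 learning trials
```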

References

Abraham A, Pedregosa F, Eickenberg M, Gervais P, Mueller A, Kossaifi J, Gramfort A, Thirion B, Varoquaux G. 2014. Machine learning for neuroimaging with scikit-learn. Front Neuroinform 8:14.

Dam G, Kording K, Wei K. 2013. Credit assignment during movement reinforcement learning. PLoS One 8:e55352.

Esteban O, Blair R, Markiewicz CJ, Berleant SL. n.d. fMRIPrep. Software. Zenodo.

Esteban O, Markiewicz CJ, Blair RW, Moodie CA, Isik AI, Erramuzpe A, Kent JD, Goncalves M, DuPre E, Snyder M, Oya H, Ghosh SS, Wright J, Durnez J, Poldrack RA, Gorgolewski KJ. 2019. fMRIPrep: a robust preprocessing pipeline for functional MRI. Nat Methods 16:111–116.

Frazier JA, Chiu S, Breeze JL, Makris N, Lange N, Kennedy DN, Herbert MR, Bent EK, Koneru VK, Dieterich ME, Hodge SM, Rauch SL, Grant PE, Cohen BM, Seidman LJ, Caviness VS, Biederman J. 2005. Structural brain magnetic resonance imaging of limbic and thalamic volumes in pediatric bipolar disorder. Am J Psychiatry 162:1256–1265.

Gorgolewski K, Burns CD, Madison C, Clark D, Halchenko YO, Waskom ML, Ghosh SS. 2011. Nipype: a flexible, lightweight and extensible neuroimaging data processing framework in python. Front Neuroinform 5:13.

Gorgolewski KJ, Esteban O, Markiewicz CJ, Ziegler E, Ellis DG, Notter MP, Jarecka D, Johnson H, Burns C, Manhães-Savio A. 2018. Nipype [Software]. Zenodo.

Härdle W. 1990. Applied Nonparametric Regression. doi:10.1017/ccol0521382483

Makris N, Goldstein JM, Kennedy D, Hodge SM, Caviness VS, Faraone SV, Tsuang MT, Seidman LJ. 2006. Decreased volume of left and total anterior insular lobule in schizophrenia. Schizophrenia Research. doi:10.1016/j.schres.2005.11.020

Ramsay J, Silverman BW. 2013. Functional Data Analysis. Springer Science & Business Media.

Ramsay J, Wickham H, Ramsay MJ, deSolve S. 2022. Package "fda."

Schaefer A, Kong R, Gordon EM, Laumann TO, Zuo X-N, Holmes AJ, Eickhoff SB, Yeo BTT. 2018. Local-Global Parcellation of the Human Cerebral Cortex from Intrinsic Functional Connectivity MRI. Cereb Cortex 28:3095–3114.

Wu HG, Miyamoto YR, Gonzalez Castro LN, Ölveczky BP, Smith MA. 2014. Temporal structure of motor variability is dynamically regulated and predicts motor learning ability. Nat Neurosci 17:312–321.

Funding

Canadian Institutes of Health Research, Award: PJT175012, Neurosciences, Mental Health and Addiction

Natural Sciences and Engineering Research Council, Award: RGPIN-2017-04684