Post-processed data for: Diverse operant control of different motor cortex populations during learning
Cite this dataset
Vendrell-Llopis, Nuria et al. (2022). Post-processed data for: Diverse operant control of different motor cortex populations during learning [Dataset]. Dryad. https://doi.org/10.6078/D1ZB0D
We tested two populations of cortical neurons, intra-telencephalic (IT) and extra-telencephalic or pyramidal-tract (PT) neurons, as direct neurons for neuroprosthetic control. We labelled the two cell classes in two groups of mice and trained the animals to modulate the activity of either IT or PT neurons to control a calcium-imaging-based brain-machine interface (CaBMI). In addition, we used machine learning and game-theoretic approaches to reverse-engineer the learning outcome and dissect the causal contributions of cell class, neuronal activity, experimental confounds, and other features to learning.
Thus, we present a rich dataset with simultaneous 2-photon calcium imaging recordings of over a thousand neurons in the rodent motor cortex during neuroprosthetic learning. The dataset contains CaImAn-postprocessed data from four different imaging planes spanning 400 µm.
CaBMI was performed using anatomically labelled IT or PT neurons as direct neurons.
Recordings of calcium imaging were performed with a Bruker Ultima Investigator (Bruker, Billerica, MA) using a Chameleon Ultra II Ti:Sapphire mode-locked laser (Coherent, Santa Clara, CA) tuned to 920 nm. Photons were collected with two GaAsP PMTs for different channels using an Olympus objective (XLUMPLFLN 20XW).
Each of the 4 imaging planes was separated into a block and independently analyzed with CaImAn to obtain the activity of each neuron during the recording.
Please see manuscript for further methods.
This dataset was analyzed with Python 3.6, but the files can also be opened in Matlab.
The data for each animal are contained in a zip file identified by the group the animal belonged to (IT or PT) and a number (IT1, PT9, etc.). Each zip file contains multiple sessions as HDF5 files; the names of these files contain the recording date as YYMMDD. HDF5 files can be opened/accessed with Python (h5py) or Matlab (h5read). See the documentation of each platform for further information on how to work with HDF5 files.
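As a minimal sketch of the h5py route, the helper below opens one session file and lists its dataset keys (the example filename is hypothetical):

```python
import h5py

def list_session_keys(path):
    """Open one session's HDF5 file and return its dataset keys, sorted."""
    with h5py.File(path, "r") as f:
        return sorted(f.keys())

# Example (hypothetical filename following the YYMMDD convention):
# list_session_keys("IT1/190101.hdf5")
```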
Each of these HDF5 files contains the following keys:
- C -> CaImAn temporal information of identified components (possible neurons) NxM where N is the number of components and M are frames (both baseline and BMI).
- Nsparse -> Spatial information of identified components (possible neurons) as a sparse matrix.
- SNR -> SNR resulting from CaImAn postprocessing. This value was only used to select valid neurons from the CaImAn components and does not represent the SNR described in the methods.
- array_miss -> Array with trials that resulted in failure.
- array_T1 -> Array with trials that were successful.
- base_im -> Background image in the form of NxNxM where N is the number of pixels and M is the number of planes.
- com -> Location of components after locating the center of soma. See methods.
- com_cm -> Location of components as output of CaImAn.
- cursor -> BMI cursor of N frames (only BMI frames).
- dff -> dFF of each component NxM where N is the number of components and M are frames (both baseline and BMI).
- e2_neur -> Index of neurons belonging to ensemble 2.
- ens_neur -> Index of the ensemble neurons among the rest of the components.
- freq -> Frequency mapped from the online cursor with N frames (only BMI frames).
- hits -> Frames where the animal hit the target and ended the trial successfully.
- miss -> Frames where a trial ended without hitting the target.
- nerden -> Boolean array with a true value if the CaImAn component is identified as a valid neuron.
- neuron_act -> Spike activity (S in CaImAn) for each component. NxM where N is the number of components and M are frames (both baseline and BMI).
- online_data -> Online recordings of the BMI. NxM, where N are occurrences (valid frames); the first column of M is time, the next column is the ScanImage frame index, and the remaining columns M[2:5] are the activity of the direct neurons as recorded online.
- red_im -> Background image for the red channel in the form of NxNxM where N is the number of pixels and M is the number of planes.
- red_label -> Boolean array with a true value if the CaImAn component is identified as a red-labelled neuron.
- trial_end -> Frames when trials ended (either hits or misses).
- trial_start -> Frames when trials started.
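Combining the keys above, one can, for example, restrict the dFF traces to valid neurons via the `nerden` flags, or pull out the direct (ensemble) neurons via `ens_neur`. This is a sketch under the layout described above, not the authors' analysis code:

```python
import h5py
import numpy as np

def load_valid_dff(path):
    """Return dFF traces (components x frames) restricted to components
    flagged as valid neurons in the nerden boolean array."""
    with h5py.File(path, "r") as f:
        dff = f["dff"][:]
        nerden = np.asarray(f["nerden"][:], dtype=bool)
    return dff[nerden]

def load_ensemble_dff(path):
    """Return dFF traces of the ensemble (direct) neurons, indexed by ens_neur."""
    with h5py.File(path, "r") as f:
        dff = f["dff"][:]
        ens = np.asarray(f["ens_neur"][:], dtype=int)
    return dff[ens]
```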
In addition we provide:
- df_all -> pandas dataframe with the features used for XGBoost/SHAP.
- XGSHAP_model -> results for the model + explainer.
- synthetic_analysis -> zip with the dataframe + XGBoost/SHAP results for dependent and independent synthetic data.
(Pandas dataframes can be opened in Python with the pandas package or imported as a table in Matlab.)
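A minimal sketch of loading the feature table in Python, assuming `df_all` is stored as a pickled dataframe (if it is saved in another pandas-readable format, substitute the matching `pandas.read_*` function):

```python
import pandas as pd

def load_feature_table(path):
    """Load the df_all feature table used for XGBoost/SHAP.
    Assumption: the file is a pickled pandas DataFrame."""
    return pd.read_pickle(path)
```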
National Institute of Neurological Disorders and Stroke, Award: 5U19NS104649-03
Army Research Office Award, Award: W911NF-16-1-0453