Data for: Hippocampal place codes are gated by behavioral engagement
Data files (Mar 23, 2022 version; 26.44 GB total)
- ny117_20190518_behavior.mat (1.87 MB)
- ny117_20190518_neural.mat (41.54 MB)
- ny119_20190402_behavior.mat (2.05 MB)
- ny119_20190402_neural.mat (213.40 MB)
- ny119_20190403_behavior.mat (2.02 MB)
- ny119_20190403_neural.mat (86.12 MB)
- ny119_20190404_behavior.mat (2.03 MB)
- ny119_20190404_neural.mat (87.54 MB)
- ny119_20190408_behavior.mat (2.05 MB)
- ny119_20190408_neural.mat (231.07 MB)
- ny119_20190515_behavior.mat (2.06 MB)
- ny119_20190515_neural.mat (79.52 MB)
- ny144_20190829_behavior.mat (2.10 MB)
- ny144_20190829_neural.mat (632.42 MB)
- ny144_20190830_behavior.mat (1.41 MB)
- ny144_20190830_neural.mat (429.90 MB)
- ny144_20190831_behavior.mat (1.33 MB)
- ny144_20190831_neural.mat (393.45 MB)
- ny144_20190901_behavior.mat (1.99 MB)
- ny144_20190901_neural.mat (451.88 MB)
- ny144_20190903_behavior.mat (2.05 MB)
- ny144_20190903_neural.mat (674.86 MB)
- ny144_20190904_behavior.mat (2.01 MB)
- ny144_20190904_neural.mat (676.35 MB)
- ny144_20190910_behavior.mat (2.07 MB)
- ny144_20190910_neural.mat (571.51 MB)
- ny144_20190911_behavior.mat (2 MB)
- ny144_20190911_neural.mat (608.42 MB)
- ny182_20191205_behavior.mat (2.41 MB)
- ny182_20191205_neural.mat (800.79 MB)
- ny182_20191207_behavior.mat (1.63 MB)
- ny182_20191207_neural.mat (524.33 MB)
- ny182_20191209_behavior.mat (3.67 MB)
- ny182_20191209_neural.mat (1.19 GB)
- ny188_20191207_behavior.mat (1.99 MB)
- ny188_20191207_neural.mat (612.44 MB)
- ny188_20191209_behavior.mat (1.80 MB)
- ny188_20191209_neural.mat (562.93 MB)
- ny188_20191210_behavior.mat (1.65 MB)
- ny188_20191210_neural.mat (462.47 MB)
- ny188_20191211_behavior.mat (1.33 MB)
- ny188_20191211_neural.mat (242.51 MB)
- ny188_20191213_behavior.mat (945.64 KB)
- ny188_20191213_neural.mat (115.90 MB)
- ny211_20210707_behavior.mat (4.25 MB)
- ny211_20210707_neural.mat (1.78 GB)
- ny211_20210708_behavior.mat (4.02 MB)
- ny211_20210708_neural.mat (1.87 GB)
- ny211_20210712_behavior.mat (3.33 MB)
- ny211_20210712_neural.mat (1.24 GB)
- ny211_20210810_behavior.mat (3.87 MB)
- ny211_20210810_neural.mat (1.56 GB)
- ny225_20211012_behavior.mat (4.22 MB)
- ny225_20211012_neural.mat (1.90 GB)
- ny225_20211014_behavior.mat (4.19 MB)
- ny225_20211014_neural.mat (1.35 GB)
- ny226_20210928_behavior.mat (2.53 MB)
- ny226_20210928_neural.mat (293.13 MB)
- ny226_20210929_behavior.mat (2.72 MB)
- ny226_20210929_neural.mat (225.38 MB)
- ny226_20210930_behavior.mat (2.81 MB)
- ny226_20210930_neural.mat (317.11 MB)
- ny226_20211010_behavior.mat (2.24 MB)
- ny226_20211010_neural.mat (265.40 MB)
- ny226_20211012_behavior.mat (4 MB)
- ny226_20211012_neural.mat (1.25 GB)
- ny226_20211013_behavior.mat (2.72 MB)
- ny226_20211013_neural.mat (496.49 MB)
- ny226_20211014_behavior.mat (4.03 MB)
- ny226_20211014_neural.mat (916.29 MB)
- ny226_20211015_behavior.mat (4.01 MB)
- ny226_20211015_neural.mat (1.03 GB)
- ny228_20211013_behavior.mat (4.12 MB)
- ny228_20211013_neural.mat (439.32 MB)
- ny228_20211014_behavior.mat (4.17 MB)
- ny228_20211014_neural.mat (693.92 MB)
- ny228_20211015_behavior.mat (4.17 MB)
- ny228_20211015_neural.mat (1.01 GB)
- README.txt (4.12 KB)
- session_info.mat (3.18 KB)
Abstract
As animals explore an environment, the hippocampus is thought to automatically form and maintain a place code by combining sensory and self-motion signals. Instead, we observed an extensive degradation of the place code when mice voluntarily disengaged from a virtual-navigation task, remarkably even as they continued to traverse the identical environment. Internal states therefore can strongly gate spatial maps and reorganize hippocampal activity even without sensory and self-motion changes.
Mice were trained in virtual reality to navigate a two-meter-long linear track that repeated in a circular topology. Mice received liquid rewards if they licked a spout in a 20-cm-long reward zone, whereas licks in other parts of the track were unrewarded. We measured the activity of hundreds of CA1 neurons using cellular-resolution calcium imaging with jRGECO1a or jGCaMP8m.
This dataset contains calcium imaging data from the hippocampus in mice performing a linear-track navigation task in virtual reality. Mice needed to lick within the reward zone on each trial to trigger water reward delivery, and we used licking behavior to analyze the spontaneous changes in their internal states and spatial representations while the task environment and reward contingencies remained constant.
The data include 39 sessions from 9 mice that express either jRGECO1a or jGCaMP8m.
All 39 sessions are from mice that exhibited good behavioral performance and satisfactory imaging quality. “session_info.mat” contains high-level information on the sessions; its contents are described below.
- jrgeco_mice: a list of mice that were imaged with jRGECO1a.
- gcamp_mice: a list of mice that were imaged with jGCaMP8m.
- beha_im_sessions: a list of all 39 sessions that have good behavior and imaging.
- beha_im_clusters: cluster labels of all trials based on lick behavior (lick rate and selectivity; see Methods in paper) using k-means clustering with 2 clusters. Note that these cluster labels correspond ONLY to the trials in valid_trial_N for each session and may therefore be fewer than the total number of trials saved.
- eng_clust_idx: cluster label that corresponds to trials with higher lick rate and selectivity.
- kmeans_include_sessions: a list of the 32 sessions that include more than 10 trials of each cluster. These are the sessions primarily analyzed in the paper.
- kmeans_include_cluster: same as beha_im_clusters, but only for the 32 sessions in kmeans_include_sessions.
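As a sketch of how these fields fit together, the snippet below marks which trials of one session fall in the engaged cluster. It assumes the labels have already been read out of session_info.mat (e.g. with scipy.io.loadmat); the label array here is a made-up toy example, and the function name is ours, not part of the dataset.

```python
import numpy as np

def engaged_mask(cluster_labels, eng_clust_idx):
    """Return a boolean mask marking trials in the 'engaged' cluster.

    cluster_labels: 1-D array of k-means labels for one session
                    (one label per trial in valid_trial_N).
    eng_clust_idx:  the label value corresponding to the
                    high-lick-rate, high-selectivity cluster.
    """
    return np.asarray(cluster_labels) == eng_clust_idx

# Toy labels (real labels come from beha_im_clusters in session_info.mat):
labels = np.array([0, 1, 1, 0, 1])
mask = engaged_mask(labels, eng_clust_idx=1)
print(int(mask.sum()))  # number of engaged trials -> 3
```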
Session names are in the format "mouse_yyyymmdd" and each session has one "_behavior.mat" file and one "_neural.mat" file.
The "_neural.mat" file for every session includes the deconvolved and smoothed activity as “deconv_sm.” The 32 sessions in kmeans_include_sessions additionally include the raw ∆F/F activity as “dff.” These activity arrays have shape (# neurons) × (# imaging frames) (the frame count is denoted nFrames below).
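These are standard MATLAB .mat files, so in Python they can be read with scipy.io.loadmat. The sketch below writes a toy file with the layout described above (the array contents and sizes are made up for illustration) and reads it back:

```python
import numpy as np
from scipy.io import savemat, loadmat

# Toy stand-in for a "_neural.mat" file: deconv_sm and dff are
# (# neurons x # imaging frames), as described above.
toy = {"deconv_sm": np.random.rand(5, 100),
       "dff": np.random.rand(5, 100)}
savemat("toy_neural.mat", toy)

data = loadmat("toy_neural.mat")
deconv = data["deconv_sm"]       # shape: (# neurons, # frames)
n_neurons, n_frames = deconv.shape
print(n_neurons, n_frames)       # 5 100
```

Note: if a file was saved in MATLAB's v7.3 (HDF5) format, loadmat raises an error and an HDF5 reader such as h5py is needed instead.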
The "_behavior.mat" file contains behavioral information for each imaging frame in “iter” and at the trial level in “trial”. “valid_trial_N” denotes the trials of normal duration (between 3 and 60 seconds), excluding trials whose licking was not indicative of the mouse’s internal state (crutch trials in which licks occurred only after the reward was delivered within the reward zone; see Methods).
See below for detailed descriptions.
*iter*
- rawMovement: [3×nFrames double] pitch, roll, and yaw data from the ball sensors
- position: [4×nFrames double] X, Y, Z, Theta (heading direction) in the VR world. Note that X, Z, and Theta are fixed. Units are Virmen units; 2 Virmen units = 1 cm.
- velocity: [4×nFrames double] dX, dY, dZ, dTheta.
- tN: [1×nFrames double] trial number; corresponds to N in the trial structure. Note that imaging may not start immediately when the virtual reality display begins and may also end before it does, so tN may cover only a subset of N in the trial structure.
- reward: [1×nFrames double] reward delivered on each frame, in microliters.
- isLick: [1×nFrames double] binary indicator of lick on each imaging frame
- isVisible: [1×nFrames double] binary indicator of whether the virtual world is visible or not. 0 means the world is dark and no visual cues are displayed.
- manualReward: [1×nFrames double] manual reward given
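To illustrate the unit convention above, here is a minimal sketch converting the track (Y) position from Virmen units to cm, using the stated 2 Virmen units = 1 cm (the function name and toy values are ours):

```python
import numpy as np

VIRMEN_UNITS_PER_CM = 2.0  # from the position description above

def y_position_cm(position):
    """position: (4, nFrames) array of X, Y, Z, Theta in Virmen units.
    Returns the track (Y) position in cm."""
    return np.asarray(position)[1] / VIRMEN_UNITS_PER_CM

# Toy frames: Y runs from 0 to 400 Virmen units (the 2-m track).
pos = np.zeros((4, 5))
pos[1] = [0, 100, 200, 300, 400]
print(y_position_cm(pos))  # [  0.  50. 100. 150. 200.]
```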
*trial*
- N: trial number.
- duration: trial duration in seconds.
- totalReward: total reward on that trial, in microliters.
- isProbe: whether the trial is a probe trial (unrewarded regardless of behavior; see task description).
- isCrutch: whether the trial is a crutch trial (rewarded regardless of behavior).
- totalLicks: total number of licks on trial.
- fractionVisible: fraction of the trial in which the world is visible.
- meanSpeed: mean forward running speed on trial in VR units/s.
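The duration criterion for valid_trial_N can be sketched from these trial-level fields. The released valid_trial_N remains the authoritative list, since it additionally excludes crutch-like trials based on lick timing (see Methods); the bounds' inclusivity here is our assumption.

```python
import numpy as np

def normal_duration_trials(N, duration, lo=3.0, hi=60.0):
    """Return trial numbers whose duration lies within [lo, hi] seconds.

    Reproduces only the duration criterion for valid_trial_N; the
    dataset's valid_trial_N also excludes crutch-like trials.
    """
    N = np.asarray(N)
    duration = np.asarray(duration)
    keep = (duration >= lo) & (duration <= hi)
    return N[keep]

# Toy trial structure (N and duration as in the fields above):
N = np.array([1, 2, 3, 4])
dur = np.array([2.1, 12.5, 75.0, 30.0])
print(normal_duration_trials(N, dur))  # [2 4]
```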