
Data for: Hippocampal place codes are gated by behavioral engagement

Cite this dataset

Pettit, Noah; Yuan, Xintong; Harvey, Christopher (2022). Data for: Hippocampal place codes are gated by behavioral engagement [Dataset]. Dryad. https://doi.org/10.5061/dryad.2280gb5tx

Abstract

As animals explore an environment, the hippocampus is thought to automatically form and maintain a place code by combining sensory and self-motion signals. Instead, we observed an extensive degradation of the place code when mice voluntarily disengaged from a virtual-navigation task, remarkably even as they continued to traverse the identical environment. Internal states therefore can strongly gate spatial maps and reorganize hippocampal activity even without sensory and self-motion changes.

Methods

Mice were trained in virtual reality to navigate a two-meter-long linear track that repeated in a circular topology. Mice received liquid rewards if they licked a spout in a 20-cm-long reward zone, whereas licks in other parts of the track were unrewarded. We measured the activity of hundreds of CA1 neurons using cellular-resolution calcium imaging with jRGECO1a or jGCaMP8m.

Usage notes

This dataset contains calcium imaging data from the hippocampus in mice performing a linear-track navigation task in virtual reality. Mice needed to lick within the reward zone on each trial to trigger water reward delivery, and we used licking behavior to analyze the spontaneous changes in their internal states and spatial representations while the task environment and reward contingencies remained constant. 

The data include 39 sessions from 9 mice that express either jRGECO1a or jGCaMP8m. 

All 39 sessions are from mice that exhibit good behavioral performance and satisfactory imaging quality. “session_info.mat” contains high-level information on the sessions; its contents are described below.

  • jrgeco_mice: a list of mice that were imaged with jRGECO1a.
  • gcamp_mice: a list of mice that were imaged with jGCaMP8m.
  • beha_im_sessions: a list of all 39 sessions that have good behavior and imaging.
  • beha_im_clusters: cluster labels of all trials based on lick behavior (lick rate and selectivity; see Methods in the paper), obtained with k-means clustering with 2 clusters. Note that the cluster labels correspond ONLY to the trials in valid_trial_N for each session; the label array may therefore be smaller than the total number of trials saved.
  • eng_clust_idx: cluster label that corresponds to trials with higher lick rate and selectivity.
  • kmeans_include_sessions: a list of the 32 sessions that include more than 10 trials of each cluster. These are the sessions primarily analyzed in the paper.
  • kmeans_include_cluster: same as beha_im_clusters, but only for the 32 sessions in kmeans_include_sessions. 
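For Python users, a minimal loading sketch follows. It assumes the .mat files are in a pre-v7.3 MATLAB format that scipy.io.loadmat can read (v7.3 files would need h5py instead) and that the field names match the list above; the exact array layouts may differ.

    # Minimal sketch: load session_info.mat and pull out the fields listed above.
    # Assumes a pre-v7.3 .mat file readable by scipy.io.loadmat.
    from scipy.io import loadmat

    info = loadmat("session_info.mat", simplify_cells=True)

    sessions = info["beha_im_sessions"]                # all 39 good sessions
    clusters = info["beha_im_clusters"]                # per-trial lick-behavior cluster labels
    eng_idx = info["eng_clust_idx"]                    # label of the engaged cluster
    kmeans_sessions = info["kmeans_include_sessions"]  # the 32 sessions with >10 trials per cluster

    print(len(sessions), "sessions;", len(kmeans_sessions), "included in k-means analyses")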

Session names are in the format "mouse_yyyymmdd" and each session has one "_behavior.mat" file and one "_neural.mat" file. 

The "_neural.mat" file for all sessions include the deconvolved and smoothed activity as “deconv_sm.” The 32 sessions in kmeans_include_sessions additionally include the raw ∆F/F activity as “dff.” These activity files have shape # neurons X # imaging frames (denoted as nFrames below).

The "_behavior.mat" file contains behavioral information on each imaging frame in “iter” and at the trial level in “trial”. “Valid_trial_N” denote the trials that are of normal duration (between 3 and 60 seconds) and excludes trials whose licking was not indicative of the mouse’s internal states (crutch trials where licks only occurred after the reward was delivered within the reward zone; see Methods).

See below for detailed descriptions. 

*iter*

  • rawMovement: [3×nFrames double] pitch, roll, and yaw data from the ball sensors (rows in that order).
  • position: [4×nFrames double] X, Y, Z, Theta (heading direction) in the VR world. Note that X, Z, and Theta are fixed. Units are Virmen units; 2 Virmen units = 1 cm (see the conversion in the sketch after this list).
  • velocity: [4×nFrames double] dX,dY,dZ,dTheta. 
  • tN: [1×nFrames double] trial number; corresponds to N in the trial structure. Note that imaging may start after the virtual reality display begins and may end before it does, so tN may cover only a subset of the N values in the trial structure.
  • reward: [1×nFrames double] reward delivered on each frame, in microliters.
  • isLick: [1×nFrames double] binary indicator of lick on each imaging frame 
  • isVisible: [1×nFrames double] binary indicator of whether the virtual world is visible or not. 0 means the world is dark and no visual cues are displayed. 
  • manualReward: [1×nFrames double] manual reward delivered on each frame.
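Continuing the sketch above, per-frame quantities can be pulled out of “iter”. The cm conversion uses the 2-Virmen-units-per-cm note, and the field access assumes “iter” loads as a dict of arrays under simplify_cells=True:

    import numpy as np

    it = behavior["iter"]              # per-frame data (assumed dict of arrays)
    y_cm = it["position"][1, :] / 2.0  # Y position in cm (2 Virmen units = 1 cm)
    licks = it["isLick"].astype(bool)
    tN = it["tN"].astype(int)

    # Lick count on one valid trial; recall that imaging frames may cover
    # only a subset of the trials in the trial structure.
    n = int(np.asarray(valid_trials).flat[0])
    lick_count = int(licks[tN == n].sum())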

*trial*

  • N: trial number.
  • duration: trial duration in seconds. 
  • totalReward: total reward delivered on that trial, in microliters.
  • isProbe: whether the trial is a probe trial (unrewarded regardless of behavior; see task description).
  • isCrutch: whether the trial is a crutch trial (rewarded regardless of behavior).
  • totalLicks: total number of licks on the trial.
  • fractionVisible: fraction of the trial in which the world is visible. 
  • meanSpeed: mean forward running speed on trial in VR units/s.
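Finally, the duration part of the valid-trial criterion can be approximated from the trial structure. This is only an approximation (valid_trial_N also excludes trials based on lick timing, which is not reconstructed here), and it assumes “trial” loads as a list of per-trial records under simplify_cells=True:

    trials = behavior["trial"]  # assumed list of per-trial records

    candidates = [
        t["N"] for t in trials
        if 3 <= t["duration"] <= 60 and not t["isCrutch"]
    ]
    # Prefer the dataset's own valid_trial_N over this reconstruction.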