Hippocampal place cell remapping occurs with memory storage of aversive experiences
Data files
Jul 26, 2023 version files (151.01 MB total):
- Blair_et_al._2023_eLife_v2.zip (151 MB)
- README.md (11.85 KB)
Abstract
Aversive stimuli can cause hippocampal place cells to remap their firing fields, but it is not known whether remapping plays a role in storing memories of aversive experiences. Here we addressed this question by performing in-vivo calcium imaging of CA1 place cells in freely behaving rats (n=14). Rats were first trained to prefer a short path over a long path for obtaining food reward, then trained to avoid the short path by delivering a mild footshock. Remapping was assessed by comparing place cell population vector similarity before acquisition versus after extinction of avoidance. Some rats received shock after systemic injections of the amnestic drug scopolamine at a dose (1 mg/kg) that impaired avoidance learning but spared spatial tuning and shock-evoked responses of CA1 neurons. Place cells remapped significantly more following remembered than forgotten shocks (drug-free versus scopolamine conditions); shock-induced remapping did not cause place fields to migrate toward or away from the shocked location and was similarly prevalent in cells that were responsive versus non-responsive to shocks. When rats were exposed to a neutral barrier rather than aversive shock, place cells remapped significantly less in response to the barrier. We conclude that place cell remapping occurs in response to events that are remembered rather than merely perceived and forgotten, suggesting that reorganization of hippocampal population codes may play a role in storing memories for aversive events.
Methods
MiniLFOV calcium imaging system
To record calcium activity during unrestrained behavior, we used a large field-of-view version of the Miniscope imaging system, "MiniLFOV" (Guo et al. 2023). This open-source epifluorescence camera weighs 13.9 g and images a 3.6 × 2.3 mm field of view at 2.5 µm resolution in the center and 4.4 µm at the periphery. The MiniLFOV records at 22 Hz with a 5 MP CMOS image sensor, includes an electrowetting lens for digitally setting the focal plane, and offers a modular lens configuration for a longer working distance (either 1.8 mm, used here, or 3.5 mm). The system is 20× more sensitive than the earlier v3 Miniscope and twice as sensitive as the current v4 Miniscope. Power, communication, and image data are carried over a single flexible 50 Ω coaxial cable (CW2040-3650SR, Cooner Wire) using power-over-coax filtering and a serializer/deserializer pair, providing bidirectional control communication over the I2C protocol and unidirectional high-bandwidth data streaming. The MiniLFOV interfaces with the UCLA open-source Miniscope DAQ software to stream, visualize, and record neural dynamics and head-orientation data; the DAQ platform and software allow adjustment of excitation intensity, focal plane, image sensor gain, and frame rate. See the MiniLFOV website (www.github.com/Aharoni-Lab/Miniscope-LFOV) and methods paper (Guo et al. 2023) for further details and printable part files.
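As a practical aside, the sketch below shows one way to load a session recorded by the Miniscope DAQ software into memory for analysis. The assumed file layout (numbered .avi chunks plus a timeStamps.csv of per-frame acquisition times) reflects the DAQ software's typical output; names and formats may differ between software versions and from the processed files archived here.

```python
# Hedged sketch: load one Miniscope DAQ session into a (T, H, W) array.
# The chunked .avi naming (0.avi, 1.avi, ...) and the timeStamps.csv file
# are assumptions based on typical DAQ output, not this archive's layout.
import cv2
import numpy as np
import pandas as pd
from pathlib import Path

def load_session(session_dir):
    session_dir = Path(session_dir)
    # sort chunks numerically so 10.avi follows 9.avi rather than 1.avi
    chunks = sorted(session_dir.glob("*.avi"), key=lambda p: int(p.stem))
    frames = []
    for path in chunks:
        cap = cv2.VideoCapture(str(path))
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frames.append(frame[:, :, 0])  # grayscale saved as 3 channels
        cap.release()
    timestamps = pd.read_csv(session_dir / "timeStamps.csv")
    return np.stack(frames), timestamps
```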
Trace extraction and spike inference
To accelerate image processing, each session's video stack was cropped to a rectangle excluding edge regions that contained no neural activity in any session from that animal. The stack was also temporally downsampled to every other frame, yielding an effective sample rate of ~11 Hz. Non-rigid motion correction was applied to remove residual non-uniform tissue motion artifacts (Pnevmatikakis and Giovannucci 2017). Source extraction was then performed using the Python implementation of the Calcium Imaging Analysis package, 'CaImAn' (Giovannucci et al. 2019), yielding an individual spatial contour and demixed temporal fluorescence trace for each detected neuron; the source extraction parameters can be found in the scripts provided at https://github.com/tadblair/tadblair/tree/Blair_et_al. Deconvolved spikes were derived from the denoised fluorescence traces by CaImAn's 'deconvolveCa' function, using a second-order autoregressive model with automated estimation of baseline and convolution kernel parameters. To match cell contours across sessions, the spatial contour weights from CaImAn were thresholded at 50% of their peak value to generate binary pixel masks, which were then registered across all sessions included in each rat's analysis using CellReg (Sheintuch et al. 2017).
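The footprint-thresholding step described above is simple to reproduce. Below is a minimal sketch (illustrative, not the authors' script) of converting CaImAn spatial footprints into binary masks of the kind passed to CellReg, assuming A is CaImAn's pixels × cells footprint matrix (estimates.A) and dims is the field-of-view shape.

```python
# Illustrative sketch of 50%-of-peak footprint thresholding for CellReg.
# A: (n_pixels, n_cells) CaImAn footprint matrix (possibly scipy.sparse);
# dims: (height, width) of the imaging field of view.
import numpy as np
import scipy.sparse as sp

def footprints_to_masks(A, dims, thresh_frac=0.5):
    if sp.issparse(A):
        A = A.toarray()
    n_cells = A.shape[1]
    masks = np.zeros((n_cells, *dims), dtype=bool)
    for i in range(n_cells):
        # CaImAn footprints unravel to the FOV in Fortran (column-major) order
        w = A[:, i].reshape(dims, order="F")
        if w.max() > 0:
            masks[i] = w >= thresh_frac * w.max()
    return masks
```

One mask stack per session would then be exported (e.g., to a .mat file) for CellReg's cross-session registration.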
Behavior tracking
A webcam mounted in the behavior room tracked a red LED on top of the miniscope; this video was saved alongside the calcium imaging by the Miniscope software, with synchronized frame timestamps. Behavior videos were first processed with custom Python code: all session videos were concatenated into a single TIFF stack, downsampled to 15 frames per second, background-subtracted (the median image of the stack was subtracted from each frame), and rescaled to the original 8-bit range so that maximum and minimum values matched those before subtraction. Background-subtracted behavior videos were then processed in MATLAB. The rat's position in each frame was determined from the location of the red LED in the image. Extracted positions were corrected using an inverted model of the empirically measured camera distortion and converted from pixels to centimeters according to the maze dimensions. Positional information was then interpolated to the timestamps of the calcium imaging video.
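The sketch below illustrates the preprocessing and alignment steps just described: median subtraction with rescaling to the 8-bit range, a simple brightest-pixel stand-in for the LED tracker, and linear interpolation of positions onto the calcium frame timestamps. Function names and the tracker itself are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of the behavior pipeline: background subtraction,
# crude LED tracking, and interpolation onto calcium frame times.
import numpy as np

def subtract_background(stack):
    """stack: (T, H, W) uint8 frames; returns median-subtracted uint8 stack."""
    sub = stack.astype(np.float32) - np.median(stack, axis=0)
    sub -= sub.min()                      # shift so the minimum is zero
    sub *= 255.0 / max(sub.max(), 1e-9)   # rescale to the full 8-bit range
    return sub.astype(np.uint8)

def track_led(stack):
    """Brightest pixel per frame as a stand-in LED detector; returns (T, 2) x,y."""
    T, H, W = stack.shape
    idx = stack.reshape(T, -1).argmax(axis=1)
    return np.column_stack((idx % W, idx // W))

def align_to_calcium(pos_xy, t_behavior, t_calcium):
    """Linearly interpolate (x, y) positions onto calcium frame timestamps."""
    x = np.interp(t_calcium, t_behavior, pos_xy[:, 0])
    y = np.interp(t_calcium, t_behavior, pos_xy[:, 1])
    return np.column_stack((x, y))
```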
Usage notes
Opening the data files requires MATLAB 2016 (or later) or Octave.