Larval salamander retinal population data in response to natural movies from the Chicago Motion Database
Data files
Dec 11, 2024 version files, 1.95 GB total:
- binaryCheckerboard.mat (33.04 MB)
- movieBinnedSpiking.mat (733.17 KB)
- MultipleMoviesStim_1_tree.avi (213.02 MB)
- MultipleMoviesStim_2_water.avi (426.04 MB)
- MultipleMoviesStim_3_grasses.avi (426.04 MB)
- MultipleMoviesStim_4_fish.avi (426.04 MB)
- MultipleMoviesStim_5_opticflow.avi (426.04 MB)
- README.md (2.94 KB)
Abstract
Everything that the brain sees must first be encoded by the retina, which maintains a reliable representation of the visual world in many different, complex natural scenes while also adapting to stimulus changes. This study quantifies whether and how the brain selectively encodes stimulus features about scene identity in complex naturalistic environments. While a wealth of previous work has examined the static and dynamic features of the population code in retinal ganglion cells, less is known about how populations form both flexible and reliable encodings of natural moving scenes. We record the larval salamander retina responding to five different natural movies over many repeats, and use these data to characterize the population code in terms of single-cell fluctuations in rate and pairwise couplings between cells. Decomposing the population code into independent and cell-cell interaction terms reveals how broad scene structure is encoded in the retinal output: while single-cell activity adapts to different stimuli, the population structure captured in the sparse, strong couplings is consistent across natural movies as well as synthetic stimuli. We show that these interactions contribute to encoding scene identity. We also demonstrate that this structure likely arises in part from shared bipolar cell input as well as from gap junctions between retinal ganglion cells and amacrine cells.
README: Larval salamander retinal population data in response to natural movies from the Chicago Motion Database
https://doi.org/10.5061/dryad.4qrfj6qm8
Description of the data and file structure
This data is described in the 2024 paper "Stimulus-invariant aspects of the retinal code drive discriminability of natural scenes". If this data is used for a publication, please cite that paper.
The dataset contains the following files:
- MultipleMoviesStim_1_tree.avi
- MultipleMoviesStim_2_water.avi
- MultipleMoviesStim_3_grasses.avi
- MultipleMoviesStim_4_fish.avi
- MultipleMoviesStim_5_opticflow.avi
- movieBinnedSpiking.mat
- binaryCheckerboard.mat
All of the .avi files are the movie stimuli that were shown to the salamander retina as described in our publication. The file "binaryCheckerboard.mat" contains three variables:
- samplingFreq, the sampling frequency of the response and stimulus (Hz)
- binaryCheckerboard, a 93x120289 matrix corresponding to the binarized neural response of each of the 93 cells to each frame of the checkerboard stimulus
- stimulusFrames, a 40x40x120289 matrix corresponding to the actual checkerboard stimulus
Specifically, the i'th frame of the stimulus is stimulusFrames(:,:,i) and the response of the j'th neuron to the i'th frame is binaryCheckerboard(j,i).
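As a minimal illustration of this indexing (not part of the dataset itself), the MATLAB sketch below loads the file and extracts one frame together with the population response to it; the frame index is an arbitrary choice.

```matlab
% Minimal sketch (MATLAB): load binaryCheckerboard.mat and index a single frame.
% The frame index below is arbitrary and used only for illustration.
S = load('binaryCheckerboard.mat');    % fields: samplingFreq, binaryCheckerboard, stimulusFrames

i = 1000;                              % arbitrary frame index
frame   = S.stimulusFrames(:, :, i);   % 40x40 checkerboard frame i
popResp = S.binaryCheckerboard(:, i);  % 93x1 binary responses of all cells to frame i

imagesc(frame); colormap(gray); axis image
title(sprintf('Checkerboard frame %d, %d cells spiking', i, sum(popResp)));
```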
The file "movieBinedSpikes.mat" contains the following variables:
- movnames, which contains the names of each of the five movies in order
- ncell, the number of cells we took recordings from (93)
- nmov, the number of movies in the dataset (5)
- nreps, the number of reps for each of the movies
- samplingfreq, the sampling frequency of the data described here (Hz)
- binned, a 4-D array (repetitions x time bins x cells x movies) containing the binarized responses from each cell to the movie stimuli. The data corresponding to the i'th movie are binned(1:nreps(i), :, :, i), and the responses of the j'th neuron to the i'th movie are binned(1:nreps(i), :, j, i).
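As a minimal illustration of this indexing (not part of the dataset itself), the MATLAB sketch below extracts the responses to one movie and computes a trial-averaged firing rate; the movie and cell indices are arbitrary choices.

```matlab
% Minimal sketch (MATLAB): index 'binned' and compute a trial-averaged rate.
% The movie and cell indices below are arbitrary and used only for illustration.
load('movieBinnedSpiking.mat');                % movnames, ncell, nmov, nreps, samplingfreq, binned

i = 2;                                         % movie index
j = 5;                                         % cell index
trials = double(binned(1:nreps(i), :, j, i));  % nreps(i) x (time bins) binary spike trains
psth   = mean(trials, 1) * samplingfreq;       % trial-averaged firing rate in spikes/s

t = (0:numel(psth) - 1) / samplingfreq;        % time axis in seconds
plot(t, psth); xlabel('Time (s)'); ylabel('Firing rate (spikes/s)');
% movnames holds the corresponding movie names (indexing depends on how it is stored).
```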
Files and variables
File: movieBinnedSpiking.mat
Description: MATLAB file containing the binned, binary spikes from the population.
Variables
- see above
File: binaryCheckerboard.mat
Description: The stimulus used for receptive field mapping
Variables
- see above
File: MultipleMoviesStim_1_tree.avi
Description: natural movie of a tree blowing in the wind
File: MultipleMoviesStim_4_fish.avi
Description: natural movie of fish swimming in a tank with real plants
File: MultipleMoviesStim_5_opticflow.avi
Description: natural movie of a leafy woods scene, with a camera moving through the underbrush
File: MultipleMoviesStim_3_grasses.avi
Description: natural movie of a stand of tall grasses blowing in the wind
File: MultipleMoviesStim_2_water.avi
Description: natural movie of water flowing through a small canal
Methods
Neural data: Voltage traces from the output (retinal ganglion cell) layer of a larval tiger salamander retina were recorded following the methods outlined in O. Marre et al., Mapping a complete neural population in the retina. J. Neurosci. 32, 14859–14873 (2012). In brief, the retina was isolated in darkness and pressed against a 252-channel multielectrode array. Voltage recordings were taken during presentation of both natural movies and white noise stimuli, and spikes were sorted using an automated clustering algorithm that was hand-curated after initial template clustering and fits. This technique captured a highly overlapping neural population of 93 cells that fully tiled the recorded region of visual space. Spike times were binned at 16.667 ms for all analyses presented.
Visual stimuli: A white noise checkerboard stimulus (with binary white and black squares) was played at 30 frames per second (fps) for 30 minutes before and after the natural scene stimuli. Five different natural movies, each lasting 20 seconds, were played in a pseudorandom order, and each was displayed a minimum of 80 times. The movies labeled tree, water, grasses, fish, and self-motion were repeated 83, 80, 84, 91, and 85 times, respectively. All natural scenes except for the tree stimulus were displayed at 60 fps; the tree stimulus was updated at 30 fps with each frame repeated twice to match the 60 fps frame rate of the other movies.
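The 16.667 ms bin width corresponds to one frame at the 60 fps movie rate. As a minimal sketch of the binning convention (raw spike times are not included in this dataset; spikeTimes below is a hypothetical vector of spike times in seconds for one cell on one 20 s trial):

```matlab
% Minimal sketch (MATLAB) of the binning convention. Raw spike times are not
% included in this dataset; 'spikeTimes' is a hypothetical vector of spike
% times (in seconds) for one cell on one 20 s movie trial.
binWidth  = 1/60;                           % 16.667 ms, one bin per frame at 60 fps
edges     = 0:binWidth:20;                  % bin edges spanning a 20 s trial
counts    = histcounts(spikeTimes, edges);  % spike count per bin
binarized = counts > 0;                     % binary (spike / no spike) response per bin
```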
In all movies, the cells significantly increase their firing rates in the first 200 ms following the switch to a new stimulus, followed by a rapid decay back to a baseline firing rate. This transient is likely a strong population response to the abrupt change in luminance within the cells' receptive fields. In subsequent analyses, we exclude the first 500 ms of every trial to isolate the more steady-state response of the retina to scene-specific features and dynamics.
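With 16.667 ms bins, 500 ms corresponds to 30 bins. A minimal MATLAB sketch of this exclusion applied to the binned data (the movie index is an arbitrary choice):

```matlab
% Minimal sketch (MATLAB): drop the first 500 ms of every trial before computing
% steady-state firing rates. The movie index below is arbitrary.
load('movieBinnedSpiking.mat');
nExclude = round(0.5 * samplingfreq);                         % bins in the first 500 ms (30 at 60 Hz)

i = 1;                                                        % movie index
trials = double(binned(1:nreps(i), :, :, i));                 % reps x time bins x cells
steady = trials(:, nExclude+1:end, :);                        % exclude the onset transient
meanRate = squeeze(mean(mean(steady, 1), 2)) * samplingfreq;  % per-cell steady-state rate (spikes/s)
```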