
Datasets for neuronal imaging, extracellular recordings and behavioral rig code

Cite this dataset

Telias, Michael; Sit, Kevin (2022). Datasets for neuronal imaging, extracellular recordings and behavioral rig code [Dataset]. Dryad. https://doi.org/10.5061/dryad.xsj3tx9gx

Abstract

Rod and cone photoreceptors degenerate in retinitis pigmentosa (RP). While downstream neurons survive, they undergo physiological changes, including accelerated spontaneous firing in retinal ganglion cells (RGCs). Retinoic acid (RA) is the molecular trigger of RGC hyperactivity, but whether this interferes with visual perception is unknown. Here we show that inhibiting RA synthesis with disulfiram, a deterrent of human alcohol abuse, improves behavioral image detection in vision-impaired mice. In vivo Ca2+ imaging shows that disulfiram sharpens orientation-tuning of visual cortical neurons and strengthens fidelity of responses to natural scenes. An RA receptor inhibitor also reduces RGC hyperactivity, sharpens cortical representations, and improves image detection. These findings suggest that photoreceptor degeneration is not the only cause of vision loss in RP. RA-induced corruption of retinal information processing also degrades vision, pointing to RA synthesis and signaling inhibitors as potential therapeutic tools for improving sight in RP and other retinal degenerative disorders.

Methods

RAR Reporter Imaging Assay

Live retinal pieces mounted on nitrocellulose paper were maintained under oxygenated ACSF perfusion at 34°C. A spinning-disk confocal microscope (Olympus IX-50) with a 40x water-submersible objective was used for fluorescence imaging to detect red or green fluorescence owing to RFP or GFP expression. 1.5 μm-thick optical sections of the ganglion cell layer of the retina were compiled to generate Z-stacks. Z-stacks were flattened and analyzed with ImageJ (NIH). Two to three fields of view were analyzed for each retinal piece, and individual regions of interest (ROIs) were drawn around every visible cell body, enabling measurement of the mean grey value (MGV) for both RFP and GFP fluorescence. The GFP/RFP ratio was calculated for each ROI and averaged across retinal pieces (individual data points) and mice (mean value).
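
As an illustration, the per-ROI ratio and averaging step can be sketched in MATLAB as follows (variable names and values are placeholders, not part of the dataset):

    mgvGFP = [120 95 140 110];         % mean grey values for GFP, one entry per ROI (placeholder values)
    mgvRFP = [200 180 210 190];        % mean grey values for RFP for the same ROIs (placeholder values)
    ratioPerROI = mgvGFP ./ mgvRFP;    % GFP/RFP ratio for each ROI
    pieceMean   = mean(ratioPerROI);   % one data point per retinal piece; piece means are then averaged per mouse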

MEA

Retinas were dissected and maintained in ACSF as described previously. Individual pieces of retina were placed ganglion cell layer down onto an array with 60 electrodes spaced 200 μm apart (1060-2-BC, Multi-Channel Systems). After mounting, each retinal piece was dark-adapted for 30 minutes under constant perfusion of 34°C oxygenated ACSF. Extracellular signals were digitized at 20 kHz and passed through a 200 Hz high-pass 2nd-order Butterworth recursive filter. Spikes were extracted using a threshold voltage of 4 SD from the median background signal of each channel. Spikes were then aligned and clustered primarily in 3D principal component space using T-Distribution Expectation-Maximization (Offline Sorter, Plexon). Inclusion criteria for units were distinct depolarization and hyperpolarization phases, inter-spike interval histograms with clear peak values, and at least 50 contributing spikes. Exclusion criteria included multiple peaks, high noise, and low amplitude in channels with more than 3 detected units.
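
A minimal MATLAB sketch of the filtering and thresholding steps described above is given below; the variable names, the synthetic trace, the median-based SD estimate, and the negative-going threshold are assumptions, and the actual spike sorting was performed in Offline Sorter (Plexon).

    fs = 20000;                                  % sampling rate (Hz)
    rawTrace = randn(20*fs, 1);                  % placeholder for one channel's raw signal
    [b, a] = butter(2, 200/(fs/2), 'high');      % 200 Hz high-pass, 2nd-order Butterworth
    filtered = filter(b, a, rawTrace);           % recursive (IIR) filtering
    noiseSD = median(abs(filtered))/0.6745;      % robust SD estimate from the median (assumption)
    thresh = 4*noiseSD;                          % 4 SD threshold
    % negative-going threshold crossings as candidate spike times (sign convention is an assumption)
    spikeIdx = find(filtered(1:end-1) > -thresh & filtered(2:end) <= -thresh);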

Cortical Imaging

Visual stimuli

All visual stimuli were generated with a Windows PC using MATLAB and the Psychophysics toolbox (67). Visual stimuli were presented on two LCD monitors that can display visual images to either eye independently. Each monitor (17.5 × 13 cm, 800 × 600 pixels, 60 Hz refresh rate) was positioned symmetrically 5 cm from each eye at a 30° angle right of the midline, spanning 120° (azimuth) by 100° (elevation) of visual space. The monitors were located 3 cm above 0° elevation and tilted 20° downward. A nonreflective drape was placed over the inactive monitor to reduce reflections from the active monitor.

Orientation tuning was measured with drifting sine-wave gratings (spatial frequency: 0.05 cycles/deg; temporal frequency: 2 Hz) presented in one of 12 directions, spanning 0° to 330° in 30° increments. For a single repeat, each grating was presented once for 2 seconds with a luminance-matched 4 second inter-trial gray screen between presentations. This was repeated for 8 repetitions per session.
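
For illustration, a single frame of such a drifting grating can be generated in MATLAB as below; the actual stimuli were rendered with the Psychophysics Toolbox, and the pixel-per-degree scaling here is a simplification of the monitor geometry described above.

    pixPerDeg = 800/120;                 % approx. scaling: 800 px across ~120 deg of azimuth (assumption)
    sf = 0.05;                           % spatial frequency (cycles/deg)
    tf = 2;                              % temporal frequency (Hz)
    directions = 0:30:330;               % 12 drift directions
    [x, y] = meshgrid(1:800, 1:600);     % pixel grid matching the monitor resolution
    theta = deg2rad(directions(1));      % choose one direction
    t = 0.5;                             % time point within the 2 s presentation (s)
    grating = sin(2*pi*(sf/pixPerDeg)*(x*cos(theta) + y*sin(theta)) - 2*pi*tf*t);
    frame = 0.5 + 0.5*grating;           % luminance scaled to [0 1]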

Natural scenes stimuli consisted of 900 frames from Touch of Evil (Orson Welles, Universal Pictures, 1958) presented at 30 frames per second, giving a 30 second presentation time per repeat. The clip consists of a single continuous scene with no cuts, as previously described, and is commonly used as a visual stimulus (https://observatory.brain-map.org/visualcoding/stimulus/natural_movies). Each presentation was repeated 30 times, with a 5 second inter-trial gray screen.

Two-photon imaging

After >2 weeks recovery from surgery, GCaMP6s fluorescence was imaged using a Prairie Investigator two-photon microscopy system with a resonant galvo-scanning module (Bruker). For fluorescence excitation, we used a Ti:Sapphire laser (Mai-Tai eHP, Newport) with dispersion compensation (Deep See, Newport) tuned to λ = 920 nm. For collection, we used GaAsP photomultiplier tubes (Hamamatsu). To achieve a wide field of view, we used a 16X/0.8 NA microscope objective (Nikon) at 1X (850 × 850 μm) or 2X (425 × 425 μm) magnification. Laser power ranged from 40 to 75 mW at the sample depending on GCaMP6s expression levels. Photobleaching was minimal (<1%/min) for all laser powers used. A custom stainless-steel light blocker (eMachineShop.com) was mounted to the head plate and interlocked with a tube around the objective to prevent light from the visual stimulus monitor from reaching the PMTs. During imaging experiments, the polypropylene tube supporting the mouse was suspended from the behavior platform with high tension springs (Small Parts) to reduce movement artifacts.

Two-photon post-processing

Images were acquired using PrairieView acquisition software and converted into TIF files. All subsequent analyses were performed in MATLAB (Mathworks) using custom code (https://labs.mcdb.ucsb.edu/goard/michael/content/resources). First, images were corrected for X–Y movement by registration to a reference image (the pixel-wise mean of all frames) using 2-dimensional cross-correlation.
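
The registration step can be approximated with the following MATLAB sketch, which estimates whole-frame shifts from the peak of an FFT-based cross-correlation with the mean image; variable names and the placeholder stack are assumptions, and the published analysis uses the custom code linked above.

    stack = rand(256, 256, 100);                    % placeholder movie (rows x cols x frames)
    ref = mean(stack, 3);                           % pixel-wise mean image as the reference
    for f = 1:size(stack, 3)
        % 2-D cross-correlation of the reference with frame f, computed in the Fourier domain
        xc = fftshift(real(ifft2(fft2(ref) .* conj(fft2(stack(:, :, f))))));
        [~, idx] = max(xc(:));                      % location of the correlation peak
        [r, c] = ind2sub(size(xc), idx);
        shift = [r c] - (floor(size(xc)/2) + 1);    % offset of the peak from the center
        stack(:, :, f) = circshift(stack(:, :, f), shift);  % apply the X-Y correction
    end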

To identify responsive neural somata, a pixel-wise activity map was calculated using a modified kurtosis measure. Neuron cell bodies were identified using a local adaptive threshold and iterative segmentation. Automatically defined ROIs were then manually checked for proper segmentation in a graphical user interface (allowing comparison to raw fluorescence and activity map images). To ensure that the response of individual neurons was not due to local neuropil contamination of somatic signals, a corrected fluorescence measure was estimated according to:

F_corrected(n) = F_soma(n) - α(F_neuropil(n) - F̄_neuropil)

where F_neuropil(n) was defined as the fluorescence in the region <30 μm from the ROI border (excluding other ROIs) for frame n, F̄_neuropil denotes the average neuropil fluorescence across the time series, and α was chosen from [0, 1] to minimize the Pearson’s correlation coefficient between F_corrected and F_neuropil. The ΔF/F for each neuron was then calculated as:

ΔF/F(n) = (F_n - F_0) / F_0

where F_n is the corrected fluorescence (F_corrected) for frame n and F_0 is defined as the first mode of the corrected fluorescence density distribution across the entire time series.
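
A compact MATLAB sketch of the neuropil correction and ΔF/F calculation, following the two equations above, might look like the following; the traces are placeholders, the absolute value of the correlation is used when choosing α, and the first mode of the density is approximated by the peak of a kernel density estimate, all of which are assumptions rather than the authors' exact implementation.

    Fsoma  = 100 + 10*randn(1, 5000);                 % placeholder somatic fluorescence trace
    Fneuro =  80 + 10*randn(1, 5000);                 % placeholder neuropil fluorescence trace
    alphas = 0:0.01:1;
    cc = zeros(size(alphas));
    for i = 1:numel(alphas)
        Fc = Fsoma - alphas(i)*(Fneuro - mean(Fneuro));
        r  = corrcoef(Fc, Fneuro);
        cc(i) = abs(r(1, 2));                         % |Pearson correlation| with the neuropil
    end
    [~, best] = min(cc);                              % alpha in [0, 1] minimizing the correlation
    Fcorr = Fsoma - alphas(best)*(Fneuro - mean(Fneuro));
    [density, vals] = ksdensity(Fcorr);               % density of the corrected fluorescence
    [~, pk] = max(density);
    F0  = vals(pk);                                   % mode of the density, used as baseline
    dFF = (Fcorr - F0) ./ F0;                         % dF/F per frame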

Analysis of two-photon imaging data

Blinding to experimental condition

For disulfiram experiments, disulfiram-containing chow or control chow of the same composition (Dyets, Inc) were given neutral codes by an investigator not involved in the study and administered to the mice with the experimenter blind to experimental condition. For BMS-493 experiments, vials of drug and vehicle were given neutral codes and each solution was used for one eye. In both cases, the experimental condition was revealed only after the primary analysis was complete.

Orientation tuning

Neural responses to the orientation tuning stimulus were first separated into trials, each containing the response of the neuron across all tested orientations. For each neuron, we averaged the baseline-subtracted responses to each orientation, creating an orientation tuning curve for each trial. To calculate the orientation selectivity index (OSI) in a cross-validated manner, we first separated the orientation tuning curves into even and odd trials. We then aligned the even trials using the maximal response of the averaged odd trials for each neuron, and vice versa, resulting in aligned responses. We then calculated the OSI from the averaged tuning curves for each neuron using the following equation:

OSI = (R_pref - R_pref+π) / (R_pref + R_pref+π)

where R_pref is the neuron’s average response at its preferred orientation, defined by cross-validation on a different set of trials using the above procedure, and R_pref+π is its average response to the direction 180° (π) away. Aligning the orientation tuning curves of the neurons using cross-validation provides a more accurate measurement of the orientation tuning of the neuron, as it prevents non-selective neurons from having high OSI values due to spurious neural activity.
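
A sketch of the cross-validated OSI computation in MATLAB; the response matrix, its layout, and the even/odd split details are assumptions.

    resp = rand(8, 12);                        % placeholder: 8 trials x 12 drift directions (0:30:330 deg)
    oddMean  = mean(resp(1:2:end, :), 1);      % average tuning curve from odd trials
    evenMean = mean(resp(2:2:end, :), 1);      % average tuning curve from even trials
    [~, pref] = max(oddMean);                  % preferred direction defined on odd trials
    opp = mod(pref - 1 + 6, 12) + 1;           % direction pi (180 deg, 6 steps of 30 deg) away
    Rpref = evenMean(pref);                    % responses evaluated on the held-out even trials
    Ropp  = evenMean(opp);
    OSI = (Rpref - Ropp) / (Rpref + Ropp);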

Naturalistic movies reliability

Neural responses to the naturalistic movies were first separated into trials, with each trial containing the full response of the neuron to the entire presented movie. The reliability of each neuron's response to the naturalistic movie was calculated as follows:

R_c = (1/T) Σ_{t=1…T} CC(r_c,t , r̄_c,≠t)

where R_c is the reliability for cell c, t is the trial number from 1 to T, CC is the Pearson correlation coefficient, r_c,t is the response of cell c on trial t, and r̄_c,≠t is the average response of cell c on all trials excluding trial t. To separate neurons by reliability deciles, we independently calculated the decile cutoffs for each condition, then binned neurons into their respective deciles.
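
A MATLAB sketch of this leave-one-trial-out reliability; the response matrix and its layout are placeholders.

    respMat = rand(30, 900);                              % placeholder: 30 trials x 900 movie frames for one cell
    T = size(respMat, 1);
    cc = zeros(T, 1);
    for t = 1:T
        others = mean(respMat(setdiff(1:T, t), :), 1);    % average response over all other trials
        r = corrcoef(respMat(t, :), others);              % Pearson correlation with the held-out trial
        cc(t) = r(1, 2);
    end
    Rc = mean(cc);                                        % reliability of cell c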

Naturalistic movies decoding analyses

To decode naturalistic movie responses from the population data, we first randomly selected neurons of a given pool size (pool sizes: 2, 4, 8, 16, 32, 64, 128, 256, or 512 neurons) from across all recordings. The neural responses to natural movies within that pool were then divided into even and odd trials. The average population activity across even trials was used to calculate a “template” population vector for each frame of the movie. We then estimated the movie frame (F_decoded) from the population activity during each actual frame (F_actual). To accomplish this, we calculated the population vector from the odd trials during F_actual and compared it to the “template” population vectors (even trials) for all of the frames. The frame with the smallest Euclidean distance between population vectors was chosen as the decoded frame (F_decoded). This process was repeated for each frame (F_actual) of the movie. For each pool size of neurons used, the entire procedure was iterated 1000 times, picking new neurons for each iteration. This resulted in a confusion matrix that describes the similarity of neural activity patterns for each frame between non-overlapping trials. To assess decoder performance, we measured the percentage of decoded frames that fell within 10 frames of the actual frame (chance level = 7%).
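
The template-matching decoder can be sketched in MATLAB as below; array names, layout, and the placeholder data are assumptions, and the full analysis additionally iterates over 1000 random neuron pools per pool size.

    popResp = rand(32, 900, 30);                          % placeholder: neurons x frames x trials for one pool
    template = mean(popResp(:, :, 2:2:end), 3);           % even trials -> template population vector per frame
    test     = mean(popResp(:, :, 1:2:end), 3);           % odd trials -> activity to be decoded
    nFrames = size(popResp, 2);
    decoded = zeros(1, nFrames);
    for f = 1:nFrames
        d = sqrt(sum((template - test(:, f)).^2, 1));     % Euclidean distance to every template frame
        [~, decoded(f)] = min(d);                         % nearest template = decoded frame
    end
    accuracy = mean(abs(decoded - (1:nFrames)) <= 10);    % fraction decoded within 10 frames of the actual frame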

Statistical Analysis

For data shown in Figures 1, 2, 6, 7 and 8, we employed generalized linear mixed effects models (gLMEM) to account for the individual differences between mice in our statistical analyses. The formula for these models is as follows:

y = Xβ + Zb + ε

where y is the response vector, X is the fixed-effects design matrix (denoting treatment condition), β is the fixed effects vector, Z is the random-effects design matrix (denoting different mice), b is the random effects vector, and ε is the observation error vector. After fitting the models for each experiment, we performed F-tests on the appropriate contrasts to determine significance. For data presented in Figures 2D and 2H, significance between cumulative probabilities was tested using the Kolmogorov–Smirnov test. For data presented in Figures 3, 4, 5 and Supplemental Figures 1 and 3, comparisons between groups used non-parametric tests (Mann-Whitney U test for independent samples or Wilcoxon signed-rank test for paired data), unless the data passed the normality test (Shapiro-Wilk) and could be analyzed with parametric tests (two-tailed t-test).
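
In MATLAB, this type of model can be fit with fitlme (or fitglme for non-Gaussian responses); the sketch below uses placeholder data and hypothetical variable names, not the authors' actual code.

    nObs = 200;
    mouseID   = categorical(randi(8, nObs, 1));                            % random effect: 8 mice (placeholder)
    treatment = categorical(randi(2, nObs, 1), 1:2, {'control', 'drug'});  % fixed effect: condition (placeholder)
    y = randn(nObs, 1) + 0.5*double(treatment == 'drug');                  % placeholder response with a treatment effect
    tbl = table(y, treatment, mouseID);
    lme = fitlme(tbl, 'y ~ treatment + (1|mouseID)');    % fixed effect of treatment, random intercept per mouse
    p = coefTest(lme);                                    % F-test on the treatment contrast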

Usage notes

Please note that all '.mat' files are to be used in conjunction with the provided MATLAB scripts. All data in the .mat files are stored in proprietary data formats that require the accompanying MATLAB scripts to read them.