Data from: Awake responses suggest inefficient dense coding in the mouse retina
Data files (Oct 05, 2023 version; 4.88 GB total):
- Boissonnet_etal2023.zip (4.88 GB)
- README.md (4.76 KB)
Abstract
The structure and function of the vertebrate retina have been extensively studied across species with an isolated, ex vivo preparation. Retinal function in vivo, however, remains elusive, especially in awake animals. Here we performed single-unit extracellular recordings in the optic tract of head-fixed mice to compare the output of awake, anesthetized, and ex vivo retinas. While the visual response properties were overall similar across conditions, we found that awake retinal output had in general 1) faster kinetics with less variability in the response latencies; 2) a larger dynamic range; and 3) higher firing activity, by ~20 Hz on average, for both baseline and visually evoked responses. Our modeling analyses further showed that such awake response patterns convey comparable total information but less efficiently, and allow for a linear population decoder to perform significantly better than the anesthetized or ex vivo responses. These results highlight distinct retinal behavior in awake states, in particular suggesting that the retina employs dense coding in vivo, rather than sparse efficient coding as has been often assumed from ex vivo studies.
README: Awake responses suggest inefficient dense coding in the mouse retina
Hiroki Asari
asari@embl.it
2023-Oct-05
This repository contains all the data files and code used in the following paper:
Boissonnet T, Tripodi M, and Asari H (2023)
Awake responses suggest inefficient dense coding in the mouse retina
bioRxiv 2022.02.15.480512
doi: https://doi.org/10.1101/2022.02.15.480512
If you have any questions, please address correspondence to:
Hiroki Asari, PhD
Epigenetics and Neurobiology Unit (EMBL Rome)
European Molecular Biology Laboratory
Via Ramarini 32,
00015 Monterotondo, Italy
tel: +39 06 90091439
email: asari@embl.it
web: https://www.embl.org/groups/asari/
INSTALLATION
Download theonerig (https://github.com/Tom-TBT/theonerig) and install it in a conda environment. Within the downloaded folder, run in a terminal:

```shell
conda create -n tor python=3.6
conda activate tor
pip install packaging
pip install -e .
```
FILE DESCRIPTION AND USAGE
DATA
-> cells_dataframe.csv
Pandas dataframe generated within "0 - Dataframe generation.ipynb"
See "cells_dataframe_keys.txt" and "Read df examples.ipynb"
for the dataframe key descriptions and data retrieval.
-> unique_cells.csv
Indices of cells selected manually in each recording.
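As a quick-start example, the dataframe can be explored with pandas. The sketch below uses a hypothetical miniature table in place of "cells_dataframe.csv"; the column names here are purely illustrative, and the actual keys are documented in "cells_dataframe_keys.txt".

```python
import pandas as pd

# Hypothetical miniature stand-in for cells_dataframe.csv
# (column names are illustrative; see cells_dataframe_keys.txt for real keys)
df = pd.DataFrame({
    "cell_id": [0, 1, 2, 3],
    "condition": ["awake", "awake", "fmm", "isoflurane"],
    "baseline_rate_hz": [25.1, 31.4, 6.2, 8.9],
})

# Typical usage: select cells recorded in one condition
# and summarize a response measure across them
awake = df[df["condition"] == "awake"]
print(awake["baseline_rate_hz"].mean())
```

With the real file, replace the constructed DataFrame with `pd.read_csv("cells_dataframe.csv")` and use the documented keys.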
-> chirp_stim_trace
This directory contains stimulus intensity data for
each "chirp" stimulus type in NumPy binary format:
- chirp_am_intensity.npy: Chirp-AM stimulus
- chirp_am_old_intensities.npy: Chirp-AM stimulus (old version)
- chirp_fe_intensities.npy: Chirp-FM stimulus
- chirp_fm_old_intensities.npy: Chirp-FM stimulus (old version)
See methods of the associated publication for details.
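Each .npy file holds the stimulus intensity trace and can be read back with NumPy (e.g. `np.load("chirp_am_intensity.npy")`). A minimal sketch with a synthetic amplitude-modulated trace; the frame rate, duration, and modulation frequencies below are illustrative assumptions, not the actual stimulus parameters:

```python
import numpy as np

# Hypothetical stand-in for a chirp intensity trace:
# a 2 s amplitude-modulated signal sampled at 60 frames/s
# (all parameter values here are illustrative assumptions)
fps = 60
t = np.arange(2 * fps) / fps
envelope = 0.5 * np.sin(2 * np.pi * 0.5 * t)   # slow AM envelope
carrier = np.sin(2 * np.pi * 8.0 * t)          # fast luminance carrier
intensity = 0.5 + envelope * carrier           # stays within [0, 1]

print(intensity.shape)
```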
-> in_vitro_data
This directory contains in vitro RGC recording data sets, reorganized from:
Vlasiuk and Asari (2021) Feedback from retinal ganglion cells to the inner retina
doi: 10.5281/zenodo.5057577
See "ReadMe.txt" in the directory for details.
-> in_vivo_data
This directory contains in vivo optic tract recording data sets,
collected in the associated study:
- awake: awake chronic recordings
- fmm: recordings under FMM anesthesia
- isoflurane: recordings under isoflurane anesthesia
- new: awake acute recordings
The data for each recording is stored as a "Record_Master" object in HDF5 format,
including the following:
- main_tp: main time points in samples (to which each data stream is aligned)
- S_matrix: N-by-M matrix containing spike trains of N cells for M frames
- signals: stimulus marker (photodiode) signals
- checkerboard: X-by-Y-by-T matrix containing presented stimulus intensity data (X-by-Y checkers for T frames)
- chirp_am, chirp_freq_epoch, fullfield_flicker: presented stimulus intensity data
- moving_gratings: moving grating parameters [bar widths, angle, moving speed]
- eye_tracking: eye tracking data [X-center, Y-center, X-diameter, Y-diameter, rotation]
- treadmill: treadmill data (-10 to 10 V, where negative values indicate running forward)
See "0 - Dataframe generation.ipynb" and the documentation of "theonerig"
(https://github.com/Tom-TBT/theonerig) for data import and processing examples.
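Once an "S_matrix" is loaded, per-cell firing rates follow directly from its N-by-M layout (spike counts per frame). A minimal sketch with a synthetic matrix; the 60 Hz frame rate used here is an assumed illustrative value, not necessarily the recordings' actual frame rate:

```python
import numpy as np

# Synthetic stand-in for an S_matrix: N = 3 cells, M = 600 frames
# (real matrices are read from the HDF5 files, e.g. via theonerig)
rng = np.random.default_rng(0)
S_matrix = rng.poisson(lam=0.5, size=(3, 600))

# Mean firing rate per cell, assuming a 60 Hz frame rate
# (an illustrative value; see the paper's methods for actual parameters)
frame_rate_hz = 60.0
duration_s = S_matrix.shape[1] / frame_rate_hz
rates = S_matrix.sum(axis=1) / duration_s
print(rates.shape)  # one rate per cell
```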
-> individual_cells_pdfs
This directory contains PDF files summarizing the cells' responses,
generated within "1 - Units PDF export.ipynb"
-> matlab
This directory contains M- and MAT-files for running the analyses
in MATLAB for Figures 7 and 8 and Supplementary Figures 1-4.
- Fig7_Information.m, Fig7_Information.mat: for Figure 7
- Fig8_decoding.m, Fig8_decoding.mat: for Figure 8
- SFig1_behavior.m, SFig1_behavior.mat: for Supplementary Figure 1
- SFig2_correlation.m, SFig2_correlation.mat: for Supplementary Figure 2
- SFig3_BatchEffect.m, SFig3_BatchEffect.mat: for Supplementary Figure 3
- SFig4_coding.m, SFig4_coding.mat: for Supplementary Figure 4
USAGE
Run the Jupyter notebooks in numerical order to reproduce the analysis results
in the associated publication. See the comments inside each notebook for details.
-> masks.py, shared_functions.py, new_functions.py
Python scripts with code shared among the Jupyter notebooks.
-> 0 - Dataframe generation.ipynb
Generates the dataframe from the in vitro and in vivo data by processing the responses
of individual cells to the stimuli, fitting those responses, and computing various indices.
-> 1 - Units PDF export.ipynb
Exports individual cells' response plots in PDF format.
-> 2 - Units proportions.ipynb
Runs the analyses for Figure 2.
-> 3 - tSTA analysis.ipynb
Runs the analyses for Figures 5 and 6.
-> 4 - Chirp amplitude modulation modelling.ipynb
Runs the analyses for Figure 3.
-> 5 - Chirp amplitude modulation modelling.ipynb
Runs the analyses for Figure 4.
-> Read df examples.ipynb
Examples of how to explore "cells_dataframe.csv".
Methods
We performed single-unit extracellular recordings in the optic tract of head-fixed mice under anesthetized or awake conditions and monitored the retinal output responses to visual stimuli in vivo. For full details, please see the methods in the associated article.