Data from: Asymmetric distribution of color-opponent response types across mouse visual cortex supports superior color vision in the sky
Data files
Jul 30, 2024 version files (47.72 MB total)
- data_mesopic_high.csv (12.67 MB)
- data_mesopic_low.csv (9.59 MB)
- data_photopic.csv (11.92 MB)
- labels_photopic.csv (995.64 KB)
- pca_photopic.csv (12.55 MB)
- README.md (2.44 KB)
Abstract
Color is an important visual feature that informs behavior, and the retinal basis for color vision has been studied across various vertebrate species. While many studies have investigated how color information is processed in visual brain areas of primate species, we have limited understanding of how it is organized beyond the retina in other species, including most dichromatic mammals. In this study, we systematically characterized how color is represented in the primary visual cortex (V1) of mice. Using large-scale neuronal recordings and a luminance and color noise stimulus, we found that more than a third of neurons in mouse V1 are color-opponent in their receptive field center, while the receptive field surround predominantly captures luminance contrast. Furthermore, we found that color-opponency is especially pronounced in posterior V1 that encodes the sky, matching the statistics of natural scenes experienced by mice. Using unsupervised clustering, we demonstrate that the asymmetry in color representations across cortex can be explained by an uneven distribution of green-On/UV-Off color-opponent response types that are represented in the upper visual field. Finally, a simple model with natural scene-inspired parametric stimuli shows that green-On/UV-Off color-opponent response types may enhance the detection of "predatory"-like dark UV objects in noisy daylight scenes. The results from this study highlight the relevance of color processing in the mouse visual system and contribute to our understanding of how color information is organized in the visual hierarchy across species.
https://doi.org/10.5061/dryad.rv15dv4hb
This repository contains the data used in the paper by Franke et al., eLife 2024, formatted as pandas DataFrames. The data were exported from a DataJoint database. Additionally, the repository includes the code that defines all the database tables from which the data were fetched.
Description of the data and file structure
- Spike-triggered averages (STAs): The files `data_photopic.csv`, `data_mesopic_low.csv`, and `data_mesopic_high.csv` are DataFrames exported from pandas as CSV files. They contain the STAs of all neurons that passed the quality threshold outlined in the paper, for three different light intensities. Each entry is uniquely defined by its `animal_id`, `session`, `scan_idx`, and `unit_id`. For each unit, the STAs are stored as a numpy array in `kernels_processed`, with 240 points corresponding to 4 x 60 points per stimulus condition, sorted as center-green, center-UV, surround-green, and surround-UV.
- PCA weights: The file `pca_photopic.csv` is a DataFrame exported using pandas as a CSV file. For the same neurons found in `data_photopic.csv`, it contains the PCA weights of the neurons and the PC-reconstructed STAs. The PCA weights were used for Gaussian Mixture Model (GMM) clustering.
- GMM labels: The file `labels_photopic.csv` is a DataFrame exported using pandas as a CSV file. For the same neurons found in `data_photopic.csv`, it contains the cluster labels obtained from GMM clustering of the PCA weights, as well as the probability for each of the 17 clusters.
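The 240-point `kernels_processed` entries described above can be split back into the four 60-point kernels once loaded. A minimal sketch of that unpacking, assuming the array round-trips through the CSV as a bracketed, space- or comma-separated string (the exact serialization may differ in the actual export); `parse_kernel` is a hypothetical helper, not part of the repository code:

```python
import numpy as np

def parse_kernel(cell):
    """Convert one kernels_processed CSV cell into a (4, 60) array.

    Assumes the cell is a string of 240 numbers, optionally wrapped in
    brackets and separated by spaces and/or commas. Rows follow the
    order stated in the dataset description: center-green, center-UV,
    surround-green, surround-UV.
    """
    values = np.array(
        str(cell).strip("[]").replace(",", " ").split(), dtype=float
    )
    if values.size != 240:
        raise ValueError(f"expected 240 points per unit, got {values.size}")
    return values.reshape(4, 60)

# Demonstration on a synthetic cell with 240 dummy values
# (in practice the cell would come from pd.read_csv("data_photopic.csv")):
cell = "[" + " ".join(str(i / 10) for i in range(240)) + "]"
kernel = parse_kernel(cell)
center_green, center_uv, surround_green, surround_uv = kernel
```

If the export stores the kernels in a different format (e.g. as pickled objects or per-point columns), only the parsing line needs to change; the 4 x 60 reshape follows directly from the ordering given above.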
The mouse retina data we analyzed has been published previously.
Code/Software
Data analysis and organization were performed using DataJoint in Python. The data available in this repository were exported from tables within the schema `mouse_color_cs_flicker`. The tables of this schema are defined in the corresponding `.py` file available in this repository. The table definitions, including the `make` functions, provide detailed information about the code used for data analysis.
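Because the exported CSVs share the same unit keys (`animal_id`, `session`, `scan_idx`, `unit_id`), they can be recombined in pandas without the DataJoint database. A minimal sketch using synthetic stand-ins for the files; the non-key column name `gmm_label` is an assumption for illustration, since the exact label column name is not stated here:

```python
import pandas as pd

# Key columns that uniquely identify each unit, per the dataset description
KEYS = ["animal_id", "session", "scan_idx", "unit_id"]

# Synthetic stand-ins; in practice use pd.read_csv("data_photopic.csv")
# and pd.read_csv("labels_photopic.csv")
stas = pd.DataFrame({
    "animal_id": [1, 1], "session": [2, 2],
    "scan_idx": [3, 3], "unit_id": [10, 11],
    "kernels_processed": ["...", "..."],  # placeholder kernel cells
})
labels = pd.DataFrame({
    "animal_id": [1, 1], "session": [2, 2],
    "scan_idx": [3, 3], "unit_id": [10, 11],
    "gmm_label": [3, 7],  # hypothetical cluster-label column
})

# One row per unit, pairing each STA with its GMM cluster label;
# validate="one_to_one" guards against duplicated unit keys
merged = stas.merge(labels, on=KEYS, how="inner", validate="one_to_one")
```

The same pattern extends to `pca_photopic.csv`, since it covers the same neurons as `data_photopic.csv`.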