Asymmetric retinal direction tuning predicts optokinetic eye movements across stimulus conditions

Cite this dataset

Harris, Scott; Dunn, Felice (2023). Asymmetric retinal direction tuning predicts optokinetic eye movements across stimulus conditions [Dataset]. Dryad. https://doi.org/10.7272/Q6RV0KZ3

Abstract

Across species, the optokinetic reflex (OKR) stabilizes vision during self-motion. OKR occurs when ON direction-selective retinal ganglion cells (oDSGCs) detect slow, global image motion on the retina. How oDSGC activity is integrated centrally to generate behavior remains unknown. Here, we discover mechanisms that contribute to motion encoding in vertically-tuned oDSGCs and leverage these findings to empirically define signal transformation between retinal output and vertical OKR behavior. We demonstrate that motion encoding in vertically-tuned oDSGCs is contrast-sensitive and asymmetric for oDSGC types that prefer opposite directions. These phenomena arise from the interplay between spike threshold nonlinearities and differences in synaptic input weights, including shifts in the balance of excitation and inhibition. In behaving mice, these neurophysiological observations, along with a central subtraction of oDSGC outputs, accurately predict the trajectories of vertical OKR across stimulus conditions. Thus, asymmetric tuning across competing sensory channels can critically shape behavior. Available here are the data associated with these findings.

Methods

The data included here come from the following sources:

  1. Behavior: eye tracking data from mice performing the vertical optokinetic reflex at multiple stimulus contrasts.
  2. Electrophysiology: recorded in mouse ex vivo retinas, from retinal ganglion cells that project to the medial terminal nucleus (vertically-tuned ON direction-selective retinal ganglion cells). Cell-attached, voltage-clamp, and current-clamp data are available. Stimuli include a bar drifting in 8 directions (10 deg/s), oscillating gratings, and full field light increments.
  3. Imaging: results from both confocal and widefield imaging experiments of mouse retinal ganglion cells that project to the contralateral medial terminal nucleus.

The data are broken up into smaller datasets based on experiment type (e.g., behavior, cell-attached, voltage-clamp). Each of these smaller datasets has a "Format" .html file detailing how its data are organized. In most cases, data can be pooled across the smaller datasets (e.g., for data that come from the same cell). As a general note, the data are largely unprocessed: behavior data contain traces of angular eye position across time; cell-attached data contain the time stamps of individual action potentials; voltage-clamp data are raw time series taken directly from the amplifier output; current-clamp data include isolated spikes and the underlying membrane potential; imaging data contain summaries of the metrics of interest.

Also available are two computational models: a leaky integrate-and-fire model (the "conductance model") and a model for generating spatial distributions of retinal ganglion cell mosaics. Each model also has a Format .html file that details how to use it.
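For readers unfamiliar with the conductance-model approach, a generic leaky integrate-and-fire neuron can be sketched as follows. This is an illustrative toy, not the dataset's conductance model: all parameter values (membrane time constant, resistance, threshold, reset) are placeholder assumptions, and the actual model's structure and usage are documented in its Format file.

```python
import numpy as np

def lif_simulate(current, dt=1e-4, tau=0.02, r_m=1e8,
                 v_rest=-0.06, v_thresh=-0.05, v_reset=-0.07):
    """Euler integration of a generic leaky integrate-and-fire neuron.

    current : injected current at each time step (A)
    dt      : integration step (s); tau : membrane time constant (s)
    r_m     : membrane resistance (ohm); voltages in volts.
    Returns the membrane-voltage trace and the spike time indices.
    """
    v = np.full(len(current), v_rest)
    spikes = []
    for t in range(1, len(current)):
        # Leak toward rest plus drive from the injected current
        dv = (-(v[t - 1] - v_rest) + r_m * current[t - 1]) * dt / tau
        v[t] = v[t - 1] + dv
        if v[t] >= v_thresh:      # threshold crossing -> spike and reset
            spikes.append(t)
            v[t] = v_reset
    return v, spikes

# A 100 ms step of suprathreshold current (placeholder value) drives
# repetitive spiking; zero input leaves the cell at rest.
i_step = np.full(1000, 2e-10)     # 0.2 nA at dt = 0.1 ms
v_trace, spike_idx = lif_simulate(i_step)
```

The key qualitative behavior, relevant to the paper's discussion of spike-threshold nonlinearities, is that subthreshold input produces no output at all, while suprathreshold input produces spiking whose rate grows with input strength.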

See the Methods section of the manuscript for further details on experimental design and relevant analyses. See the README.html file for more information on working with the datasets.

Usage notes

All data are stored as .mat files and all code as .m files. Both can be accessed with MATLAB, GNU Octave, or various other open-source tools that can read structured data from .mat files. The Format files are .html and can be opened in any web browser.
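As one example of an open-source route into the .mat files, SciPy's `scipy.io` module can read them from Python (for pre-v7.3 MAT files; v7.3 files are HDF5 and need a tool such as h5py instead). The file and variable names below are illustrative placeholders, not the dataset's actual names; the real field layout is described in each Format file.

```python
import numpy as np
from scipy.io import loadmat, savemat

# Round-trip a toy structure to show the access pattern.
# "example.mat" and "eye_position" are hypothetical names.
savemat("example.mat", {"eye_position": np.arange(5.0)})

data = loadmat("example.mat")
trace = data["eye_position"]
# Note: loadmat returns MATLAB vectors as 2-D arrays, here shape (1, 5).
print(trace.shape, trace[0, 2])
```

Passing `squeeze_me=True` to `loadmat` collapses such singleton dimensions, which is often more convenient for vector data.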

Funding

National Eye Institute, Award: F31EY033225

McKnight Foundation

Research to Prevent Blindness

National Eye Institute, Award: R01EY029772

National Eye Institute, Award: R01EY030136

National Eye Institute, Award: P30EY002162