Data for: Ongoing and visually evoked activity in the zebrafish optic tectum and adjacent brain structures
Data files
Mar 13, 2023 version (925.90 MB)
- Fish_1.mat
- Fish_2.mat
- Fish_3.mat
- Fish_4.mat
- Fish_5.mat
- README.txt
Abstract
The ongoing activity of neuronal populations represents an internal brain state that influences how sensory information is processed to control behaviour. Conversely, external sensory inputs perturb network dynamics, resulting in lasting effects that persist beyond the duration of the stimulus. However, the relationship between these dynamics and circuit architecture, and their impact on sensory processing, cognition and behaviour, are poorly understood. By combining cellular-resolution calcium imaging with mechanistic network modelling, we aimed to infer the spatial and temporal network interactions in the zebrafish optic tectum that shape its ongoing activity and state-dependent responses to visual input. We showed that a simple recurrent network architecture, wherein tectal dynamics are dominated by fast, short-range excitation countered by long-lasting, activity-dependent suppression, was sufficient to explain multiple facets of population activity, including intermittent bursting, trial-to-trial sensory response variability and spatially selective response adaptation. Moreover, these dynamics also predicted behavioural trends, such as selective habituation of visually evoked prey-catching responses. Overall, we demonstrate that a mechanistic circuit model, built upon a uniform recurrent connectivity motif, can estimate the incidental state of a dynamic neural network and account for experience-dependent effects on sensory encoding and visually guided behaviour.
Methods
Imaging data were acquired using a custom-built digitally scanned light-sheet microscope. The excitation path included a 488 nm laser source (OBIS, Coherent, Santa Clara, California), a pair of galvanometer scan mirrors (Cambridge Technology, Bedford, Massachusetts) and an excitation objective (Plan 4X, 4x/0.1 NA, Olympus, Tokyo, Japan). A water-immersion detection objective (XLUMPLFLN, 20x/1.0 NA, Olympus), a tube lens (f = 200 mm), two relay lenses (f = 100 mm) in a 4f configuration, and an sCMOS camera (Orca Flash 4.0, Hamamatsu, Hamamatsu, Japan) were used in the orthogonal detection path. For remote focusing (Fahrbach et al., 2013), an electrically tunable lens (ETL, EL-16-40-TC-VIS-20D, Optotune, Dietikon, Switzerland) was installed between the relay lenses, conjugate to the back focal plane of the objective. Volumes (375 x 410 x 75 μm), comprising 19 imaging planes spaced 4 μm apart, were acquired at 5 volumes/s. Each plane received laser excitation for 1 ms (9% duty cycle), resulting in an average laser power at the sample of 12.4 μW. To keep the observed population of neurons in each plane in focus throughout long imaging sessions, we implemented an automatic correction for slow drift in the Z direction. At the beginning of the experiment, we acquired two reference stacks, centred on two of the imaging planes, by incrementally biasing the Z scanning mirror and ETL in steps of 1.5% of their scan amplitude. During the course of the experiment, Z drift was estimated every 30 seconds by comparing recent images to these reference stacks, finding the reference images with the maximal XY cross-correlation, and averaging the two drift estimates. Every five minutes, the Z scan mirror and ETL signals were biased to offset the detected drift, based on the average of the ten most recent Z drift estimates.
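As an illustration of this drift-correction scheme, a minimal MATLAB sketch of the per-plane Z-drift estimate is given below. All variable and function names are hypothetical, not taken from the acquisition software.

% Minimal MATLAB sketch of the per-plane Z-drift estimate (illustrative only;
% names are hypothetical, not from the authors' acquisition code).
% refStack: [H x W x nSteps] reference stack acquired around one imaging
% plane by biasing the Z mirror/ETL in steps of 1.5% of scan amplitude.
% img: recently acquired image of that plane.
function driftSteps = estimateZDrift(img, refStack)
    nSteps = size(refStack, 3);
    peakCorr = zeros(nSteps, 1);
    for k = 1:nSteps
        % Peak of the normalised XY cross-correlation measures how well
        % the current image matches the k-th reference slice.
        xc = normxcorr2(refStack(:, :, k), img);
        peakCorr(k) = max(xc(:));
    end
    % The best-matching slice, relative to the central slice, gives the
    % Z drift in reference-stack steps.
    [~, kBest] = max(peakCorr);
    driftSteps = kBest - ceil(nSteps / 2);
end

Using the peak of the normalised cross-correlation makes the Z estimate tolerant of small lateral drift, since normxcorr2 evaluates all XY translations of the image against each reference slice.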
For functional imaging, larval zebrafish were mounted in a custom 3D printed chamber (SLS Nylon 12, 3DPRINTUK, London, United Kingdom) in 3% low-melting-point agarose (Sigma-Aldrich, St. Louis, Missouri) at 5 dpf and allowed to recover overnight before functional imaging at 6 dpf. Visual stimuli were back-projected (ML750ST, Optoma, New Taipei City, Taiwan) onto a curved screen forming the wall of the imaging chamber in front of the animal, at a viewing distance of ~10 mm. A coloured filter (Follies Pink No. 344, Roscolux, Stamford, Connecticut) was placed in front of the projector to block green light from the collection optics. Visual stimuli were designed in Matlab (MathWorks, Natick, Massachusetts) using Psychophysics toolbox (Brainard, 1997). Stimuli comprised 10° dark spots on a bright magenta background, moving at 20°/s either left→right or right→left across ~110° of frontal visual space. Two or three elevation angles were used, calibrated for each fish during preliminary imaging by finding elevations separated by at least 15° that produced robust, observable tectal activation (typically a very low elevation stimulus ~25° below the horizon, a low elevation stimulus ~10° below the horizon, and a high elevation stimulus ~5° above the horizon).
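A minimal Psychophysics Toolbox sketch of one left-to-right sweep is shown below. The degree-to-pixel calibration, screen geometry and vertical spot position are placeholder assumptions, not the values used in the experiments.

% Minimal Psychtoolbox sketch of a moving dark-spot sweep (illustrative;
% the calibration values are assumptions).
degPerPx = 0.2;                  % hypothetical screen calibration (deg/pixel)
spotDeg = 10; speedDeg = 20;     % 10 deg spot moving at 20 deg/s
bgColor = [255 0 255];           % bright magenta background
win = Screen('OpenWindow', max(Screen('Screens')), bgColor);
ifi = Screen('GetFlipInterval', win);
winRect = Screen('Rect', win);
spotPx = spotDeg / degPerPx;
xPx = 0;                         % left edge: left-to-right sweep
yPx = RectHeight(winRect) / 2;   % vertical position set per elevation angle
vbl = Screen('Flip', win);
while xPx < RectWidth(winRect)
    rect = CenterRectOnPointd([0 0 spotPx spotPx], xPx, yPx);
    Screen('FillOval', win, [0 0 0], rect);   % dark spot on magenta field
    vbl = Screen('Flip', win, vbl + 0.5 * ifi);
    xPx = xPx + (speedDeg / degPerPx) * ifi;  % advance at 20 deg/s
end
Screen('CloseAll');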
Eye movements were tracked during imaging experiments at 50 Hz under 850 nm illumination using a sub-stage GS3-U3-41C6NIR-C camera (Point Grey, Richmond, Canada). The angle of each eye was inferred online using a convolutional neural network (three 5x5 convolutional layers with 1, 1 and 4 channels, each followed by a stride-2 max-pooling layer, and a single fully connected layer), pre-trained on annotated images from multiple fish covering a wide range of eye positions. Eye movements were categorized as a convergent saccade if both eyes made nasally directed saccades within 150 ms of one another. Microscope control, stimulus presentation and behaviour tracking were implemented using LabVIEW (National Instruments, Austin, Texas) and Matlab.
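The convergent-saccade criterion can be expressed compactly. The sketch below assumes per-eye lists of nasally directed saccade times have already been detected; all variable names are hypothetical.

% Minimal MATLAB sketch of convergent-saccade classification (illustrative;
% variable names are hypothetical). leftSacc, rightSacc: times (s) of
% nasally directed saccades detected independently for each eye.
maxLag = 0.150;                        % 150 ms pairing window
isConvergent = false(size(leftSacc));
for i = 1:numel(leftSacc)
    % A convergent saccade requires a nasal saccade of the other eye
    % within 150 ms.
    isConvergent(i) = any(abs(rightSacc - leftSacc(i)) <= maxLag);
end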
All calcium imaging data analysis was performed using Matlab scripts. Volume motion correction was performed by 3D translation-based registration using the Matlab function ‘imregtform’, with a gradient-descent optimizer and mutual information as the image similarity metric. A registration template was generated as the time-average of the first 10 volumes and then iteratively updated after each block of 10 newly registered volumes from the first 500 frames. This template was then used to register all remaining volumes. For elavl3:H2B-GCaMP6s experiments, 2D regions of interest (ROIs) corresponding to cell nuclei were computed from each template imaging plane using the cell detection code provided by Kawashima et al. (2016). For RGC axonal arbor imaging, two ROIs encompassing the tectal neuropil were manually defined for each imaging plane. The time-varying raw fluorescence signal Fraw(t) for each ROI was extracted by computing the mean value of all pixels within the ROI mask at each time-point. A slowly varying baseline fluorescence F0(t) was estimated as the 10th percentile of a sliding 20-volume window and was used to calculate the proportional change in fluorescence, ΔF/F(t) = (Fraw(t) − F0(t)) / F0(t).
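A minimal MATLAB sketch of the motion-correction step above is shown below, pairing a regular-step gradient-descent optimizer with the Mattes mutual-information metric; variable names are hypothetical and the optimizer settings would need tuning.

% Minimal sketch of 3D translation-based volume registration (illustrative;
% variable names are hypothetical). templateVol: time-averaged template
% volume; movingVol: volume to be registered.
optimizer = registration.optimizer.RegularStepGradientDescent; % gradient descent
metric = registration.metric.MattesMutualInformation;          % mutual information
tform = imregtform(movingVol, templateVol, 'translation', optimizer, metric);
registeredVol = imwarp(movingVol, tform, ...
    'OutputView', imref3d(size(templateVol)));  % resample into template space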
The resulting ΔF/F values were subsequently zero-centred by subtracting the mean for each ROI, and ROIs with a slow drift in their baseline fluorescence (for which the standard deviation of the mean-normalised F0(t) was greater than 0.45) were discarded.
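Taken together, the baseline estimation, ΔF/F computation and ROI exclusion can be sketched as follows. A trailing window is assumed (the text does not specify its alignment), prctile requires the Statistics and Machine Learning Toolbox, and all variable names are hypothetical.

% Minimal MATLAB sketch of baseline estimation, dF/F and ROI exclusion
% (illustrative; the trailing window is an assumption). Fraw: [nROIs x T].
win = 20;                                    % sliding window, in volumes
[nROIs, T] = size(Fraw);
F0 = zeros(nROIs, T);
for t = 1:T
    idx = max(1, t - win + 1):t;             % trailing 20-volume window
    F0(:, t) = prctile(Fraw(:, idx), 10, 2); % 10th-percentile baseline
end
dFF = (Fraw - F0) ./ F0;                     % proportional change in fluorescence
dFF = dFF - mean(dFF, 2);                    % zero-centre each ROI
% Discard ROIs with slow baseline drift: std of the mean-normalised
% baseline greater than 0.45.
keep = std(F0 ./ mean(F0, 2), 0, 2) <= 0.45;
dFF = dFF(keep, :);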
To estimate spike trains, the zero-centred time series for each cell was deconvolved using thresholded OASIS (Friedrich et al., 2017) with a first-order autoregressive model (AR(1)) and an automatically estimated transient decay time constant for each ROI.
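Spike inference of this kind can be run with the publicly available OASIS_matlab implementation accompanying Friedrich et al. (2017); the call below is a sketch, and the exact option names should be verified against that package.

% Minimal sketch of spike-train estimation with thresholded OASIS, as
% implemented in the OASIS_matlab package (option names should be checked
% against that package). dff: zero-centred dF/F trace for one ROI.
[denoised, spikes] = deconvolveCa(dff, 'ar1', 'thresholded', ...
    'optimize_pars', true);  % AR(1) model; decay constant estimated per ROI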
To standardize the 3D coordinates of detected cell nuclei, template volumes were registered onto the Tg(elavl3:H2B-RFP) reference brain in the ZBB brain atlas (Marquart et al., 2017) using the ANTs toolbox version 2.1.0 (Avants et al., 2011), applying rigid, affine and non-linear (SyN) warp transformations. As an example, to register the 3D image volume in ‘fish1_01.nrrd’ to the reference brain ‘ref.nrrd’, the following command was used:
antsRegistration -d 3 --float 1 -o [fish1_,fish1_Warped.nii.gz] -n BSpline -r [ref.nrrd,fish1_01.nrrd,1] -t Rigid[0.1] -m GC[ref.nrrd,fish1_01.nrrd,1,32,Regular,0.25] -c [200x200x200x0,1e-8,10] -f 12x8x4x2 -s 4x3x2x1 -t Affine[0.1] -m GC[ref.nrrd,fish1_01.nrrd,1,32,Regular,0.25] -c [200x200x200x0,1e-8,10] -f 12x8x4x2 -s 4x3x2x1 -t SyN[0.1,6,0] -m CC[ref.nrrd,fish1_01.nrrd,1,2] -c [200x200x200x200x10,1e-7,10] -f 12x8x4x2x1 -s 4x3x2x1x0
Following registration, tectal ROIs were labelled using a manually created 3D mask.
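A minimal sketch of this labelling step, assuming the mask is a logical volume in atlas space and the ROI centroids have been transformed into atlas voxel coordinates (all names hypothetical, including the row/column axis convention):

% Minimal MATLAB sketch of labelling registered ROIs with a 3D mask
% (illustrative; names and the x/y axis convention are assumptions).
% tectumMask: logical atlas-space volume; xyz: [nROIs x 3] registered
% nuclear centroids in atlas voxel coordinates (x, y, z).
ind = sub2ind(size(tectumMask), ...
    round(xyz(:, 2)), round(xyz(:, 1)), round(xyz(:, 3)));
isTectal = tectumMask(ind);   % true for ROIs falling inside the tectal mask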