Dynamics of gaze control during prey capture in freely moving mice
Michaiel, Angie; Abe, Elliott; Niell, Cristopher (2020), Dynamics of gaze control during prey capture in freely moving mice, Dryad, Dataset, https://doi.org/10.5061/dryad.8cz8w9gmw
Most studies of visual processing are conducted under head- and gaze-restricted conditions. While this provides experimental control, it radically limits the natural exploration of the visual world, which is typically achieved through directed eye, head, and body movements. As such, less is known about how animals naturally sample the external visual world to acquire relevant visual information in natural contexts. To determine how mice target their gaze and sample the visual world during natural behavior, we measured head and bilateral eye movements in mice performing prey capture, an ethological behavior that engages vision. We find that most eye movements are compensatory for head movements, but that non-compensatory movements occur during head turns. Importantly, we find that non-compensatory gaze shifts (i.e., saccades) do not target a discrete location in visual space (e.g., the prey location); rather, orienting movements are driven by the head and work to sequentially shift and recenter the visual field. Data shared here include simultaneous recordings of eye and head movements from 105 trials of prey capture behavior across 7 animals. All data are available as .mat files.
In these experiments freely behaving mice (3 males, 4 females) hunted live crickets in a rectangular plexiglass arena. Mice were equipped with two miniaturized cameras that recorded eye movements from each eye and an accelerometer coupled with a gyroscope that recorded head movements in three dimensions. Additionally, an overhead camera recorded the behavior of the mouse and cricket.
Data from the two eyes were collected at 30 fps (frames per second) in NTSC format, an interlaced format in which two sequential images acquired at 60 fps are interdigitated into each frame on alternate horizontal lines. We de-interlaced the video to restore the native 60 fps by separating out the alternate lines of each image. We then linearly downsampled the resolution by a factor of two in order to match spatial resolution in the horizontal and vertical dimensions.
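The de-interlacing and downsampling steps above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the dataset's actual preprocessing code; the function names and the choice of averaging adjacent column pairs for the linear downsample are assumptions.

```python
import numpy as np

def deinterlace(frame):
    """Split one interlaced 30 fps frame into its two 60 fps fields.

    Even scan lines belong to one field and odd scan lines to the
    other, so each field has half the vertical resolution of the
    original frame.
    """
    field_a = frame[0::2, :]  # even scan lines
    field_b = frame[1::2, :]  # odd scan lines
    return field_a, field_b

def downsample_width(field):
    """Halve the horizontal resolution by averaging adjacent column
    pairs, so pixels are again square after de-interlacing."""
    return 0.5 * (field[:, 0::2] + field[:, 1::2])
```

Applying `deinterlace` to every frame and concatenating the fields in acquisition order restores a 60 fps stream at matched horizontal and vertical resolution.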
DeepLabCut was used to extract 8 points around the pupil from both eye tracking recordings. An ellipse was fit to these 8 points, and the rotation of this ellipse was converted into angular rotations. DeepLabCut was also used to extract 8 points on the head of the mouse (nose, center of left camera implant, end of left camera implant, left ear, center of right camera implant, end of right camera implant, right ear, center of back of head) and two points on the cricket (head and body). The 8 points on the mouse's head were used to compute the horizontal angle of the mouse's head as well as the azimuth of the mouse's head relative to the cricket.
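As a minimal sketch of the head-angle and azimuth computation, the head direction can be taken as the vector from the back of the head to the nose, and the azimuth as the angular offset of the cricket from that direction. The point choices and function names here are illustrative assumptions; the full ellipse fit used for the eyes is not reproduced.

```python
import numpy as np

def wrap_deg(angle):
    """Wrap an angle in degrees into the interval [-180, 180)."""
    return (angle + 180.0) % 360.0 - 180.0

def head_azimuth(nose, back_of_head, cricket):
    """Azimuth of the cricket relative to the mouse's head direction.

    Head direction is approximated as the back-of-head -> nose vector;
    all inputs are (x, y) points in overhead-camera coordinates.
    """
    head_deg = np.degrees(np.arctan2(nose[1] - back_of_head[1],
                                     nose[0] - back_of_head[0]))
    target_deg = np.degrees(np.arctan2(cricket[1] - nose[1],
                                       cricket[0] - nose[0]))
    return wrap_deg(target_deg - head_deg)
```

For example, a cricket directly ahead of the nose gives an azimuth of 0 degrees, and one directly to the mouse's left gives 90 degrees.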
Approaches were defined as times at which the velocity of the mouse was greater than 1 cm/sec, the azimuth of the mouse relative to the cricket was between -45 and 45 degrees, and the distance to the cricket was decreasing at a rate greater than 10 cm/sec.
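These three criteria can be combined into a per-frame boolean mask. This sketch assumes per-frame arrays of speed, azimuth, and mouse-cricket distance, and an assumed frame rate (`fps`) for converting the per-frame distance change into cm/sec; it is not the dataset's actual analysis code.

```python
import numpy as np

def approach_mask(speed_cm_s, azimuth_deg, dist_cm, fps=60):
    """Flag frames meeting the approach criteria:
    speed > 1 cm/sec, |azimuth| < 45 degrees, and distance to the
    cricket decreasing faster than 10 cm/sec. fps converts the
    per-frame distance difference into a rate in cm/sec.
    """
    range_rate = np.gradient(dist_cm) * fps  # cm/sec; negative = closing
    return (speed_cm_s > 1) & (np.abs(azimuth_deg) < 45) & (range_rate < -10)
```

Contiguous runs of `True` frames in the mask then correspond to individual approach epochs.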
DeepLabCut occasionally produced errors in estimating points on the eyes or head of the mouse or on the cricket. DeepLabCut returns these points with low likelihood values. Rather than analyzing these faulty points, we have set them to NaNs.
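The NaN masking step can be sketched as follows. The likelihood threshold of 0.9 here is an illustrative assumption; the dataset's actual cutoff is not stated above.

```python
import numpy as np

def mask_low_likelihood(xy, likelihood, threshold=0.9):
    """Replace low-confidence DeepLabCut points with NaN.

    xy: (n_frames, 2) array of a tracked point's (x, y) positions.
    likelihood: (n_frames,) DeepLabCut confidence for that point.
    Frames below the (assumed) threshold get both coordinates NaN'd,
    so downstream analyses can skip them with NaN-aware functions.
    """
    xy = np.asarray(xy, dtype=float).copy()
    xy[likelihood < threshold] = np.nan
    return xy
```

Downstream code can then use NaN-aware operations (e.g., `np.nanmean`) or interpolate across the masked frames.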
National Institutes of Health, Award: R34NS111669