
Dopamine activity in the tail of the striatum, DeepLabCut and MoSeq during novel object exploration


Watabe-Uchida, Mitsuko et al. (2022), Dopamine activity in the tail of the striatum, DeepLabCut and MoSeq during novel object exploration, Dryad, Dataset.


In this study, we characterized the dynamics of novelty exploration using multi-point tracking (DeepLabCut) and behavioral segmentation (MoSeq). Mice were habituated in an arena, and then an object was placed in a corner of the arena. We compared four groups of mice: one presented with a novel object (stimulus novelty), one presented with a familiar object (contextual novelty), one presented with a novel object after ablation of dopamine neurons that project to the tail of the striatum (TS), and one presented with a novel object after sham surgery. In a separate group of mice, dopamine activity in TS was recorded during novelty exploration.


Novelty testing sessions consisted of animals exploring a single novel object within the behavioral arena. The object was placed in a corner of the arena (taped to the floor to prevent the animal from moving it, ~12-15 cm from either wall). Sessions lasted 25 minutes per animal per day for 4-12 days, and mice were run in the same order as during habituation each day. One object was used per animal for the duration of the experiment, and objects were not shared between animals. Before each session, the object was submerged in soiled bedding (a mixture of bedding from each mouse's cage in the current round, 6 animals) and wiped with a dry Kimwipe to remove excess bedding dust. Objects were wiped with ethanol after each day and allowed to air out overnight before reuse.

For body part tracking, we used DeepLabCut version 1.0 (Mathis et al., 2018). Separate networks were used for different experimental settings: one for mice without fiber implants (network A) and one for mice with fiber implants (network B). Both networks were based on a ResNet-50 neural network (He et al., 2016; Insafutdinov et al., 2016) run with default parameters for 1,030,000 training iterations. We provided manually labeled locations of four mouse body parts within video frames for training: nose, left ear base, right ear base, and tail base. For network A, we labeled 1,760 frames taken from 64 videos; for network B, we labeled 540 frames taken from 17 videos. For both networks, 95% of labeled frames were used for training.
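As an illustration, the 95% training split of labeled frames described above can be sketched as follows (a minimal sketch; `split_frames` is a hypothetical helper written for this example, not part of the DeepLabCut API):

```python
import random

def split_frames(n_frames, train_fraction=0.95, seed=0):
    """Randomly partition labeled frame indices into training and
    held-out sets (hypothetical helper, not part of DeepLabCut)."""
    idx = list(range(n_frames))
    random.Random(seed).shuffle(idx)
    n_train = int(round(train_fraction * n_frames))
    return idx[:n_train], idx[n_train:]

# Network A: 1,760 labeled frames -> 1,672 training, 88 held out
train_a, held_a = split_frames(1760)
# Network B: 540 labeled frames -> 513 training, 27 held out
train_b, held_b = split_frames(540)
```

In the actual workflow, DeepLabCut performs this split internally when the training dataset is created; the sketch only makes the frame counts explicit.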

Raw imaging data were collected from the depth camera, pre-processed (filtered, background subtracted, and parallax corrected), and submitted to a machine learning algorithm that evaluates pose dynamics over time (Wiltschko et al., 2015). During video extraction (moseq2-extract), 900 frames were trimmed from the beginning of each video to account for the time between when the video was started and when the mouse was placed in the arena. During model learning (moseq2-model), the kappa hyperparameter was set to the total number of frames in the training set (kappa=2,711,134; 52 sessions, 52 animals). This exceeds the >=1 million frames (at 30 frames per second) recommended to ensure quality MoSeq modeling.
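The trimming and kappa bookkeeping described above can be sketched as follows (a minimal sketch; `moseq_kappa` and the per-session frame counts are illustrative, not part of the moseq2 tools):

```python
def moseq_kappa(session_frame_counts, trim=900):
    """Drop the first 900 frames of each session (dead time before the
    mouse was placed in the arena), then sum the remaining frames across
    sessions; the methods set the kappa hyperparameter to this total."""
    return sum(max(0, n - trim) for n in session_frame_counts)

# Hypothetical example: 25 min sessions at 30 fps = 45,000 frames each
sessions = [45000] * 3
kappa = moseq_kappa(sessions)  # 3 * (45000 - 900) = 132,300
```

With the study's 52 sessions, the same computation over the real (variable-length) session frame counts yields the reported kappa of 2,711,134.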

DA sensor (green) and tdTomato (red) signals were collected as voltage measurements from current pre-amplifiers (SR570, Stanford Research Systems, CA). Green and red signals were cleaned by removing 60 Hz noise with a band-stop FIR filter (58-62 Hz) and smoothing with a 50 ms moving average. Slow changes within a session were normalized by subtracting a 100 s moving median. The correlation between green and red signals was then examined by linear regression; if the correlation was significant (p<0.05), the fitted red signals were subtracted from the green signals. Responses aligned to a behavioral event were calculated by subtracting the average baseline activity (-3 s to -1 s before the event) from the average activity in the target window (0-1 s after the event).
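The pipeline above can be sketched in Python with NumPy/SciPy (a minimal sketch under assumed parameters; the sampling rate `fs`, the FIR filter length, and all function names are illustrative, not taken from the original analysis code):

```python
import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import firwin, filtfilt
from scipy.stats import linregress

def clean_signal(sig, fs):
    """Remove 60 Hz line noise with a 58-62 Hz band-stop FIR filter,
    then smooth with a 50 ms moving average."""
    taps = firwin(2001, [58.0, 62.0], pass_zero="bandstop", fs=fs)
    sig = filtfilt(taps, [1.0], sig)          # zero-phase filtering
    w = max(1, int(round(0.05 * fs)))         # 50 ms window
    return np.convolve(sig, np.ones(w) / w, mode="same")

def normalize_slow_drift(sig, fs, window_s=100.0):
    """Subtract a moving median (100 s window) to remove slow
    within-session drift."""
    win = max(1, int(round(window_s * fs)))
    return sig - median_filter(sig, size=win, mode="nearest")

def subtract_red(green, red, alpha=0.05):
    """Regress green on red; if the fit is significant (p < 0.05),
    subtract the fitted red component from the green signal."""
    fit = linregress(red, green)
    if fit.pvalue < alpha:
        return green - (fit.slope * red + fit.intercept)
    return green

def event_response(sig, event_idx, fs):
    """Mean activity 0-1 s after the event minus mean baseline
    activity -3 s to -1 s before the event."""
    base = sig[event_idx - 3 * fs : event_idx - 1 * fs].mean()
    resp = sig[event_idx : event_idx + 1 * fs].mean()
    return resp - base
```

Each helper mirrors one step of the described analysis; in practice the filter order and window sizes would be chosen to match the actual photometry sampling rate.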


National Institutes of Health, Award: U19NS113201

National Institutes of Health, Award: R01NS108740

National Institutes of Health, Award: R01MH125162

Simons Foundation, Award: Simons Collaboration on the Global Brain

Bipolar Disorder Seed Grant Program

Japan Society for the Promotion of Science

Harvard Molecules, Cells and Organisms