
Dopamine activity in the tail of the striatum, DeepLabCut and MoSeq during novel object exploration

Cite this dataset

Watabe-Uchida, Mitsuko et al. (2023). Dopamine activity in the tail of the striatum, DeepLabCut and MoSeq during novel object exploration [Dataset]. Dryad. https://doi.org/10.5061/dryad.41ns1rnh2

Abstract

In this study, we characterized the dynamics of novelty exploration using multi-point tracking (DeepLabCut) and behavioral segmentation (MoSeq). Mice were habituated to an arena, and then an object was placed in a corner of the arena. We compared four groups of mice: one presented with a novel object (stimulus novelty), one presented with a familiar object (contextual novelty), one presented with a novel object after ablation of dopamine neurons that project to the tail of the striatum (TS), and one presented with a novel object after sham surgery. In a separate group of mice, dopamine activity in TS was recorded during novelty exploration.

README: Dopamine activity in the tail of the striatum, DeepLabCut and MoSeq during novel object exploration

https://doi.org/10.5061/dryad.41ns1rnh2

There are data for 5 animal groups. Two groups of mice were presented with stimulus novelty or contextual novelty. Two further groups were presented with stimulus novelty after either ablation of dopamine neurons that project to the tail of the striatum (TS) or sham surgery. A fifth group was presented with stimulus novelty while dopamine activity in TS was recorded. All mice were recorded on video, and the locations of body parts were tracked with DeepLabCut. Some videos were also analyzed with MoSeq.

Description of the data and file structure

Data structure

DLC (DeepLabCut) data are stored in .mat files, named and organized according to the following file structure:

 

.
+-- group1_name
|   +-- animal1_name
|       +-- session1_name
|           -- DLClabel.mat
|       +-- session2_name
|       +-- session3_name
|   +-- animal2_name
|   +-- animal3_name
|   -- bout.mat
|   -- bout_multi.mat
+-- group2_name
+-- group3_name
-- akiti_miceID_231129.xlsx
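
For example, sessions can be enumerated programmatically by walking this tree. The sketch below is illustrative MATLAB only (the analysis code linked under Code/Software is the authoritative reference); 'group1_name' is a placeholder taken from the diagram above.

```matlab
% Walk animal/session folders under one group and collect DLClabel.mat paths.
root = 'group1_name';   % placeholder; substitute a real group folder
animals = dir(root);
animals = animals([animals.isdir] & ~startsWith({animals.name}, '.'));
paths = {};
for a = 1:numel(animals)
    sessions = dir(fullfile(root, animals(a).name));
    sessions = sessions([sessions.isdir] & ~startsWith({sessions.name}, '.'));
    for s = 1:numel(sessions)
        f = fullfile(root, animals(a).name, sessions(s).name, 'DLClabel.mat');
        if isfile(f)
            paths{end+1} = f; %#ok<AGROW>
        end
    end
end
fprintf('Found %d sessions with DLC labels\n', numel(paths));
```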

 

There is a separate .mat file for each session. DLClabel.mat contains the variable "Labels", which has the following format:

 

Labels(:,2)  Nose x (pixel)
Labels(:,3)  Nose y (pixel)
Labels(:,5)  Leftear x (pixel)
Labels(:,6)  Leftear y (pixel)
Labels(:,8)  Rightear x (pixel)
Labels(:,9)  Rightear y (pixel)
Labels(:,11) Tailbase x (pixel)
Labels(:,12) Tailbase y (pixel)
Labels(:,14) Tailmidpoint x (pixel)
Labels(:,15) Tailmidpoint y (pixel)
Labels(:,17) Tailtip x (pixel)
Labels(:,18) Tailtip y (pixel)
Labels(:,20) Head x (pixel) (average of nose, left ear, and right ear)
Labels(:,21) Head y (pixel)
Labels(:,22) Body x (pixel) (average of head and tail base)
Labels(:,23) Body y (pixel)
Labels(:,24) Tail x (pixel) (average of tail tip, midpoint, and base)
Labels(:,25) Tail y (pixel)
Labels(:,26) Head speed (pixel)
Labels(:,27) Head acceleration (pixel)
Labels(:,28) Head jerk (pixel)
Labels(:,29) Body speed (pixel)
Labels(:,30) Body acceleration (pixel)
Labels(:,31) Body jerk (pixel)
Labels(:,32) Nose distance from object (pixel)
Labels(:,33) Head distance from object (pixel)
Labels(:,34) Tailbase distance from object (pixel)
Labels(:,35) Body length (pixel)
Labels(:,36) Head speed related to object (pixel)
Labels(:,37) Head speed unrelated to object (pixel)
Labels(:,38) Tailbase distance from wall (pixel)
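
As a quick illustration, a session's labels can be loaded and indexed in MATLAB as follows. This is a minimal sketch: the path components are placeholders, the 50-pixel threshold is arbitrary, and columns not listed above (1, 4, 7, ...) are left undocumented.

```matlab
% Load DeepLabCut labels for one session (path components are placeholders).
S = load(fullfile('group1_name', 'animal1_name', 'session1_name', 'DLClabel.mat'));
Labels = S.Labels;

noseX = Labels(:, 2);            % nose x (pixel)
noseY = Labels(:, 3);            % nose y (pixel)
noseObjDist = Labels(:, 32);     % nose distance from object (pixel)

% Example: fraction of frames with the nose within 50 pixels of the object
% (threshold chosen arbitrarily for illustration).
fracNear = mean(noseObjDist < 50);
fprintf('Fraction of frames near object: %.3f\n', fracNear);
```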

 

MoSeq data are stored in MoSeq_MiceIndex_wLabels_combine3L.mat. This file contains the variable "Mice", which has the following format:

 

Mice.name: mouse name
Mice.novelty: stimulus novelty ('S') or contextual novelty ('C')
Mice.ExpDay: experiment date
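
A minimal sketch of reading this index in MATLAB, assuming "Mice" is a struct array with the fields listed above:

```matlab
% Load the MoSeq session index and select stimulus-novelty mice.
S = load('MoSeq_MiceIndex_wLabels_combine3L.mat');
Mice = S.Mice;                          % assumed to be a struct array

isStim = strcmp({Mice.novelty}, 'S');   % 'S' = stimulus novelty
stimNames = {Mice(isStim).name};
fprintf('%d stimulus-novelty mice\n', numel(stimNames));
```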

 

 

Photometry data (dopamine sensor in the tail of the striatum, TS) are stored in files named *_approach_start.mat, *_retreat.mat, and *_retreat_end.mat (three files per animal for the first day of novelty).
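
The variable names inside these photometry files are not documented here, so a safe first step is to load each file into a struct and inspect its fields, e.g.:

```matlab
% List approach-onset photometry files and report their contents.
files = dir('*_approach_start.mat');
for k = 1:numel(files)
    S = load(fullfile(files(k).folder, files(k).name));
    % Variable names inside these files are not documented above;
    % inspect them before use.
    fprintf('%s: %s\n', files(k).name, strjoin(fieldnames(S), ', '));
end
```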

 

Original videos are stored in files whose names end in '_rgb.mp4'.

 

Information about each mouse is stored in akiti_miceID_231129.xlsx.

 

There are two types of sessions:

* "novel1", "novel2", "novel3", ...: an object was presented, and the number indicates the day (for example, novel1 contains data from the first day of object presentation). The same object was presented on the following days.
* "hab1", "hab2": habituation sessions with no object, before novelty day 1.

 

There are 5 groups of mice:

1. "FP_all": dopamine sensor signals were recorded in TS on novelty day 1
2. "stimulus": a novel object was presented (a different object had been presented in the home cage before habituation day 1)
3. "contextual": an unexpected but familiar object was presented (the same object had been presented in the home cage before habituation day 1)
4. "saline": sham surgery (vehicle injection); a novel object was presented
5. "6OHDA": TS-projecting dopamine neurons were ablated; a novel object was presented

 

MoSeq data were obtained from groups 2 to 5 on novelty day 1.

Sharing/Access information

Code/Software

MATLAB code for analyses using this dataset is available on GitHub (https://github.com/ckakiti/Novelty_paper_2021).

Methods

Novelty testing sessions consisted of animals exploring a single novel object within the behavioral arena. The object was placed in a corner of the arena (taped to the floor to prevent the animal from moving it, ~12-15 cm from either wall). Sessions lasted 25 minutes per animal per day for 4-12 days, and mice were run in the same order each day as during habituation. One object was used per animal for the duration of the experiment, and objects were not shared between animals. Before each session, the object was submerged in soiled bedding (a mixture of bedding from each mouse's cage in the current round, 6 animals) and wiped with a dry Kimwipe to remove excess bedding dust. Objects were wiped with ethanol at the end of each day and allowed to air out overnight before reuse.

For body part tracking, we used DeepLabCut version 1.0 (Mathis et al., 2018). Separate networks were used for different experimental settings: mice without fiber implants (network A) and mice with fiber implants (network B). Both networks were ResNet-50-based (He et al., 2016; Insafutdinov et al., 2016) and trained with default parameters for 1,030,000 iterations. For training, we manually labeled the locations of four body parts in video frames: nose, left ear base, right ear base, and tail base. For network A, we labeled 1,760 frames taken from 64 videos; for network B, 540 frames taken from 17 videos. For both networks, 95% of labeled frames were used for training.

Raw imaging data were collected from the depth camera, pre-processed (filtered, background-subtracted, and parallax-corrected), and submitted to a machine learning algorithm that evaluates pose dynamics over time (Wiltschko et al., 2015). During video extraction (moseq2-extract), the first 900 frames were trimmed to account for the time between when the video was started and when the mouse was placed in the arena. During model learning (moseq2-model), the kappa hyperparameter was set to the total number of frames in the training set (kappa = 2,711,134; 52 sessions, 52 animals). This exceeds the recommended minimum of 1 million frames (at 30 frames per second) needed to ensure quality MoSeq modeling.

DA sensor (green) and tdTomato (red) signals were collected as voltage measurements from current pre-amplifiers (SR570, Stanford Research Systems, CA). Green and red signals were cleaned by removing 60 Hz noise with a band-stop FIR filter (58-62 Hz) and smoothing with a 50 ms moving average. Slow changes within a session were normalized using a 100 s moving median. The correlation between green and red signals was then examined by linear regression; if the correlation was significant (p < 0.05), the fitted red signal was subtracted from the green signal. Responses aligned to a behavioral event were calculated by subtracting the average baseline activity (-3 s to -1 s before the event) from the average activity in the target window (0-1 s after the event).
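
The repository linked under Code/Software contains the analysis code actually used; the following is only a minimal MATLAB sketch of the steps described above. The variables green and red (raw traces as column vectors), fs (sampling rate), and eventIdx (event sample indices) are hypothetical, as the true names and sampling rate are not specified here.

```matlab
fs = 1000;   % assumed sampling rate (Hz); not specified above

% 1) Remove 60 Hz line noise with a band-stop FIR filter (58-62 Hz).
bs = designfilt('bandstopfir', 'FilterOrder', 500, ...
    'CutoffFrequency1', 58, 'CutoffFrequency2', 62, 'SampleRate', fs);
green = filtfilt(bs, green);
red   = filtfilt(bs, red);

% 2) Smooth with a 50 ms moving average.
win = round(0.050 * fs);
green = movmean(green, win);
red   = movmean(red, win);

% 3) Remove slow within-session drift with a 100 s moving median.
green = green - movmedian(green, round(100 * fs));
red   = red   - movmedian(red,   round(100 * fs));

% 4) Regress green on red; if the fit is significant (p < 0.05),
%    subtract the fitted red component.
X = [ones(size(red)) red];
[b, ~, ~, ~, stats] = regress(green, X);   % stats(3) is the p-value
if stats(3) < 0.05
    green = green - X * b;
end

% 5) Event-aligned response: mean over 0 to 1 s after the event minus
%    the baseline mean over -3 to -1 s before the event.
resp = zeros(numel(eventIdx), 1);
for k = 1:numel(eventIdx)
    i = eventIdx(k);
    base    = mean(green(i - 3*fs : i - 1*fs));
    resp(k) = mean(green(i : i + 1*fs)) - base;
end
```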

Funding

National Institute of Neurological Disorders and Stroke, Award: U19NS113201

National Institute of Neurological Disorders and Stroke, Award: R01NS108740

National Institute of Mental Health, Award: R01MH125162

Simons Foundation, Award: Simons Collaboration on the Global Brain

Bipolar Disorder Seed Grant Program

Japan Society for the Promotion of Science

Harvard Molecules, Cells and Organisms