Data from: Establishing a predictable cue for catches to reduce reactivity to management events for captive rhesus macaques (Macaca mulatta)
Data files: Mar 10, 2025 version, 93.60 KB
Abstract
Psychological duress can emerge from a perceived lack of predictability; in captive circumstances, reliable signals for aversive events can therefore afford animals the opportunity to prepare behaviorally and physiologically. Does a reliable and unique signal cue for an aversive management event reduce reactivity to management events that share unreliable cues? We recorded animal responses to management events near, or involving, outdoor-housed rhesus macaques (Macaca mulatta) in two large mixed-sex groups, with experimental periods that introduced a signal coupled to catch events. Management events varied in the severity and magnitude of animal responses. Our results validated that catches were more disruptive than management events that indirectly involved animal subjects, yet were comparable to management events involving direct interactions. Signal use reduced aversive responses to more routine management events that shared unreliable cues with catches. Given the abundance of these routine events, we assert that implementing the signal provided a detectable improvement across multiple measures of disruption.
https://doi.org/10.5061/dryad.h18931zw8
https://doi.org/10.1016/j.applanim.2025.106578
Alexander J. Pritchard 1,2*, Rosemary A. Blersch 1,2, Amy C. Nathman 1, Eli R. DeBruyn 1, Julia A. Salamango 1, Emily M. Dura 1, Brianne A. Beisner 1, Jessica J. Vandeleest 1,2, Brenda McCowan 1,2
1 Neuroscience and Behavior Unit, California National Primate Research Center, University of California Davis, Davis, CA, USA
2 Department of Population Health & Reproduction, School of Veterinary Medicine, University of California Davis, Davis, CA, USA
*Corresponding author
E-mail: ajpritchard@ucdavis.edu (AJP)
Description of the data and file structure
Data are in two .RData files: 'StudyI.RData' and 'StudyII.RData'. Each file contains one dataframe ('D_Small' for Study I, conducted in 2023, and 'D_Small24' for Study II).
Within these .RData files, the two dataframes share the same basic structure of 17 columns. The columns are named using the same convention in both dataframes, with one noted exception:
"Response" = response ratings ranging from 1-5
"React" = reactivity scores ranging from 0-4. These are a composite score summed from 'Vertical.Space', 'Coo', 'Alarm.Bark', 'Monkeys.Watching'
"Density.Front" = a scored ranking of front enclosure density from 3-1
"Movement" = movement scores of four bins ascending in intervals of 25%
"Ev_Dur.n" = a 30 second interval label from 1-10, with 1 being the first interval of an event (first 30 seconds) and 10 being the 10th (five minutes)
"Tech.Approach.Cage" = did a technician approach the cage in an interval? (0 = No, 1 = Yes)
"Tech.Entered.Cage" = did a technician enter the cage in an interval? (0 = No, 1 = Yes)
"Signal" = did a signal occur? (0 = No, 1 = Yes)
"Animal.Detect" = did the animals detect a management event before the observers? (0 = No, 1 = Yes)
"Disrup_Type" [Study I] OR "Disrup24_Type" [Study II] = label of what kind of management event the interval was assigned to
"Period" = label of what project period each interval occurred in (Baseline, Experiment, Followup). In StudyII the experimental period was divided into two phases (labelled here as Treatment_A and Treatment_B)
"Date" = calendar date of data recording
"Hour" = hour of data recording
"Vertical.Space" = were the majority of animals using vertical space? (0 = No, 1 = Yes)
"Coo" = did any animal coo? (0 = No, 1 = Yes)
"Alarm.Bark" = did any animal alarm bark? (0 = No, 1 = Yes)
"Monkey.Watching" = were at least 5 animals watching the event for >/= 3 seconds? (0 = No, 1 = Yes)
Missing data code = NA
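As a minimal sketch of working with these files, the snippet below shows how the 'React' composite can be recomputed from its four binary components. The load() line (commented out) assumes the .RData file is in the working directory; the toy rows here only mirror the documented column names, using the spelling given in the 'React' entry above.

```r
# load("StudyI.RData")  # creates the dataframe D_Small (Study I)

# Toy rows mirroring the documented binary columns:
D_Small <- data.frame(
  Vertical.Space   = c(1, 0, 1),
  Coo              = c(1, 0, 0),
  Alarm.Bark       = c(0, 0, 1),
  Monkeys.Watching = c(1, 0, 1)
)

# 'React' is the row-wise sum of the four 0/1 components (range 0-4)
D_Small$React <- rowSums(D_Small[, c("Vertical.Space", "Coo",
                                     "Alarm.Bark", "Monkeys.Watching")])
D_Small$React  # 3 0 3
```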
Code/Software
Code is annotated and provides the analyses necessary to replicate the results reported in the associated manuscript. Code was run in R (v4.2.2) using the RStudio GUI. For analyses and visualization, the following libraries are needed: brms, bayesplot, emmeans, ggplot2, ggpubr, GGally. All of these libraries are available through CRAN.
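A one-time setup fragment for the required packages might look like the following (all are on CRAN, as noted above):

```r
# One-time installation of the required CRAN packages
install.packages(c("brms", "bayesplot", "emmeans",
                   "ggplot2", "ggpubr", "GGally"))

library(brms)      # Bayesian regression models (Stan backend)
library(bayesplot) # posterior diagnostics and plotting
library(emmeans)   # estimated marginal means / post-hoc contrasts
library(ggplot2)   # plotting
library(ggpubr)    # publication-ready ggplot helpers
library(GGally)    # plot matrices and pairwise summaries
```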
There is one analytical file ('Disruption Data Analyses.R'). Executing all analyses will be time-consuming and computationally intensive. The final models used in the manuscript include:
Study I
Rs_Mod (response rating model)
M_Mod (movement score model)
DFe_Mod (density score model)
Re_Mod (reactivity score model)
Study II
Rs_Mod24 (response rating model)
M_Mod24 (movement score model)
DF_Mod24 (density score model)
Re_Mod24 (reactivity score model)
The reported post-hoc comparisons are achieved through the hypothesis() and emmeans() functions.
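A sketch of these two post-hoc approaches is below. It assumes a fitted brmsfit object such as Rs_Mod from the analysis script; the predictor name 'Disrup_Type' comes from the data description above, while the contrast string passed to hypothesis() is purely illustrative (the actual coefficient names depend on the model formula in the script).

```r
# emmeans(): estimated marginal means per management-event type,
# followed by pairwise contrasts (Rs_Mod must already be fit)
em <- emmeans::emmeans(Rs_Mod, ~ Disrup_Type)
pairs(em)

# hypothesis(): a directed test on a named coefficient of the brms
# model; the coefficient name below is hypothetical
brms::hypothesis(Rs_Mod, "Disrup_TypeCatch > 0")
```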
Methods
Data consist of group-level behavioral responses to management events. Some simple data cleaning has occurred (i.e., 'Disrup_Type' was assigned after data collection, but prior to analysis, based on observer-recorded dichotomous labels); see the manuscript for a full description.
