Data from: A model of marmoset monkey vocal turn-taking
Data files (Jun 06, 2024 version, 1.69 MB)
Abstract
Vocal turn-taking has been described in a diversity of species, yet a model that captures the various processes underlying this social behavior across species has not been developed. To this end, we recorded a large and diverse dataset of marmoset monkey vocal behavior in social contexts comprising one, two and three callers and developed a model to determine the keystone factors that affect the dynamics of these natural communicative interactions. While a coupled-oscillator model failed to account for turn-taking in marmosets, our model instead revealed four key factors that encapsulate much of the pattern evident in the behavior, ranging from internal processes, such as the state of the individual, to social-context-driven suppression of calling. In addition, we show that the same key factors apply to the meerkat, a carnivorous species, in a multicaller setting. These findings indicate that vocal turn-taking is affected by a broader suite of mechanisms than previously considered, and our model provides a predictive framework with which to further explicate this natural behavior and to make direct comparisons with the analogous behavior in other species.
README: Data from: A model of vocal turn-taking
https://doi.org/10.5061/dryad.9ghx3ffpx
Description of the data and file structure
The main data files are divided across three folders; each folder contains files for one context (one, two or three animals). Each file contains a 1 x n_animals cell array, where each cell holds a list of one animal's calls (start time, end time). The file name (in the case of two-animal recordings, the folder name) indicates the names of the recorded animals and the recording date (in YYYYMMDD format).
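In Python, a session file can be loaded with `scipy.io.loadmat`, which returns the 1 x n_animals cell array as an object array whose cells are (n_calls, 2) arrays of (start, end) times. A minimal sketch of working with that structure, using made-up call times rather than the actual data:

```python
import numpy as np

def call_durations(session):
    """Return per-animal call durations (end - start) for one session.

    `session` mimics the 1 x n_animals cell array described above:
    a sequence where each element is an (n_calls, 2) array of
    (start time, end time) pairs for one animal.
    """
    return [calls[:, 1] - calls[:, 0] for calls in session]

# Hypothetical two-animal session (times in arbitrary units, not real data).
session = [
    np.array([[0.0, 1.2], [5.0, 6.1]]),  # animal 1: two calls
    np.array([[2.5, 3.4]]),              # animal 2: one call
]
durations = call_durations(session)
```

The same per-cell indexing applies after `loadmat` (e.g. with `squeeze_me=True`); only the outer container type differs.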
A final folder contains collated metadata, including the animals' names, sex, age, cage-mate status and recording details:
familial_connection
This contains two boolean arrays (sheets) whose rows and columns are both indexed by animal ID. In the first sheet, the value is TRUE if the row animal is the parent of the column animal, and FALSE otherwise. In the second sheet, the value is TRUE if the row animal is a sibling of the column animal, and FALSE otherwise.
fps
This text file contains the downsampled fps at which all recordings were taken.
monkey_data_per_session_v2
This file lists, for each two-monkey recording session, the age of each monkey in days, whether they were cage mates (cm), whether they were bonded partners (partners), whether they were a parent/child pair (parent-child), whether they were siblings (sibling), whether animal 1 was the parent of animal 2 (is_parent), and the time of day the recording was taken, rounded to the nearest hour, in 24 h notation.
Monkey_list
This file contains a list of monkey names and their indices in the code. It can be used to cross-reference data from the file names and the other metadata files.
Monkey_sex
This file contains the monkey indices and the sex of the monkey (0=Female, 1=Male).
'Fig_2_features.mat' ('Fig_2_features.csv')
This file contains the features included in the model for each call across all two-animal recordings. Note that not all calls have a valid value for every feature. Specifically, the first call of each session will be missing a value for whether it was a response, as this could not be determined (a call may have been made prior to the start of the session). If no prior calls were made in a conversation, the monkey ICI and/or target ICI will be missing. Lastly, if the end frequency of a call was the same as its maximum frequency, no slope could be calculated. All missing values are indicated as NaNs.
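Because missing values are coded as NaN, analyses that need complete cases can filter rows before fitting. A minimal sketch with an invented feature matrix (the column order and values here are illustrative only, not the actual layout of Fig_2_features):

```python
import numpy as np

# Hypothetical feature rows mirroring the missing-value convention above;
# NaN marks a value that could not be determined.
features = np.array([
    [np.nan, np.nan, np.nan, 0.5],    # first call of a session: no response flag or ICIs
    [1.0,    0.8,    1.2,    np.nan], # end freq == max freq: no slope
    [0.0,    0.6,    0.9,    0.3],    # fully observed call
])

# Keep only rows where every feature is present.
complete = features[~np.isnan(features).any(axis=1)]
```

The same mask-based filtering works after loading `Fig_2_features.csv` (e.g. with `numpy.genfromtxt` or pandas), since NaNs survive the round trip.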
Code/Software
Two sets of code are included with the data. First, MATLAB code (MATLAB 2020a) to perform all analyses described in the manuscript, ordered by figure. Second, a set of R (v4.3.1) scripts to generate the corresponding figures from the extracted data.
Methods
Data was collected using multiple directional microphones. For each context (1 animal, 2 animals, 3 animals), the call times were extracted from the raw audio files and collated into a single cell array.
For the models in Fig. 2, spectral features were additionally extracted for each call and collated across calls into one .mat file.