Data from: Crows ‘count’ the number of self-generated vocalizations
Data files (Apr 25, 2024 version, 361.77 KB)
- CrowsCountNumberofSelfGeneratedVocalizations_Dataset.xlsx
- README.md
Abstract
Producing a specific number of vocalizations with purpose requires a sophisticated combination of numerical abilities and vocal control. Whether any animal possesses such a capacity to voluntarily control the number of self-generated vocalizations remains unknown. We demonstrate that crows can flexibly produce a variable number of one to four vocalizations in response to arbitrary cues associated with numerical values. The acoustic features of the first vocalization of a sequence were predictive of the total number of vocalizations, indicating a planning process. Moreover, the acoustic features of vocal units were predictive of their ordinal position in the sequence and could be used to read out counting errors during vocal production. Together, these findings suggest that the crows' vocal enumeration capability could be an evolutionary precursor to the symbolic counting found uniquely in humans.
README: Crows ‘count’ the number of self-generated vocalizations
https://doi.org/10.5061/dryad.qjq2bvqpz
This dataset contains the behavioral data presented and analyzed in the corresponding manuscript. We trained three carrion crows (Corvus corone) on a numerical vocal production task in which they had to produce a flexible number of one to four vocalizations in response to arbitrary visual and auditory stimuli. We measured task performance, timing data, and the acoustic features of the vocalizations. Data were collected for 10 sessions per cue modality (visual and auditory), for a total of 20 sessions per crow.
Description of the data and file structure
Processed data is organized in a single Excel file with sheet names corresponding to relevant figure subplots.
Figure 1C/D: behavioral curves for visual and auditory cues in individual crows - BirdID corresponds to subject identity, Modality corresponds to the type of cue given (1: visual, 2: auditory), and the subsequent columns give the proportion of trials for each cued/produced combination, where the first number in the column name is the cued number and the second is the produced number (e.g., column 2_3 gives the proportion of trials in which the bird produced 3 vocalizations when cued to produce 2).
Figure 1E/F: group accuracy and width of behavioral curves depending on cued number - BirdID corresponds to subject identity, Modality corresponds to the type of cue given (1: visual, 2: auditory), CueNum corresponds to the cued number, accuracy gives the percent correct for that number, and STDev gives the standard deviation of the responses for that number.
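Under the column layout just described, the flattened cued/produced columns can be reshaped into a 4x4 response matrix, and per-cue accuracy and response spread (as in Figure 1E/F) recomputed from it. A minimal Python sketch; the column names follow the description above, but the helper functions and all numeric values are invented for illustration and are not from the dataset:

```python
# Reshape flattened "cue_produced" proportion columns (e.g. "2_3") into a
# 4x4 cue-by-produced response matrix, then derive per-cue accuracy and the
# standard deviation of responses. Values below are illustrative only.
import math

def response_matrix(row):
    """matrix[cue-1][prod-1] = proportion of trials cued `cue` with `prod` calls."""
    return [[row[f"{cue}_{prod}"] for prod in range(1, 5)] for cue in range(1, 5)]

def accuracy(mat, cue):
    """Percent correct for a cued number: the diagonal entry of its row."""
    return 100.0 * mat[cue - 1][cue - 1]

def response_std(mat, cue):
    """Standard deviation of produced numbers, weighted by their proportions."""
    probs = mat[cue - 1]
    mean = sum(p * n for n, p in zip(range(1, 5), probs))
    var = sum(p * (n - mean) ** 2 for n, p in zip(range(1, 5), probs))
    return math.sqrt(var)

# Hypothetical single-bird row with a diagonal-dominant response pattern.
row = {f"{c}_{p}": (0.7 if c == p else 0.1) for c in range(1, 5) for p in range(1, 5)}
mat = response_matrix(row)
```

Each row of the matrix sums to 1 by construction, so the diagonal gives accuracy and the off-diagonal mass gives the width of the behavioral curve.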
Figure 2B: relevant time intervals - BirdID corresponds to subject identity, Modality corresponds to the type of cue given (1: visual, 2: auditory), Session indexes the session over which the median values were calculated, and the subsequent columns give the relevant time intervals in seconds (RT: reaction time from cue onset to onset of the first vocalization; 1_2: interval between the first and second vocalizations; 2_3: interval between the second and third; 3_4: interval between the third and fourth).
Figure 2C: reaction time by cued number - BirdID corresponds to subject identity, Modality corresponds to the type of cue given (1: visual, 2: auditory), Session indexes the session over which the median values were calculated, and the subsequent columns give the reaction time in seconds for each cued number.
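The interval measures in these two sheets can be reproduced from per-trial event onsets. A minimal sketch, assuming cue and vocalization onset times in seconds; the function name and all timestamps below are invented for illustration:

```python
# Derive the Figure 2B/C measures from event onsets: RT is the time from cue
# onset to the first vocalization onset; 1_2, 2_3, and 3_4 are the gaps
# between successive vocalization onsets. Timestamps are invented examples.
from statistics import median

def trial_intervals(cue_onset, voc_onsets):
    """Return (reaction time, list of inter-vocalization intervals) in seconds."""
    rt = voc_onsets[0] - cue_onset
    gaps = [b - a for a, b in zip(voc_onsets, voc_onsets[1:])]
    return rt, gaps

# Hypothetical trial: cue at t = 0.0 s, followed by three vocalizations.
rt, gaps = trial_intervals(0.0, [0.85, 1.10, 1.40])

# Session-level values in the sheet are medians across trials.
session_rt = median([0.85, 0.92, 0.78])
```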
Figure 3A-F: data to plot 4x4 confusion matrices for an example crow, showing classifier performance when the acoustic features of the first vocalization in a sequence are used to predict the number of impending vocalizations.
Figure 3G/H: classifier performance (x1000) for each crow and each condition (unimodal: Vis_Vis, Aud_Aud; crossmodal: Vis_Aud, Aud_Vis).
Figure 3I: proportions of first vocalizations in error trials matching either the cued number (labeled Cue) or the produced number (labeled Prod). BirdID corresponds to subject identity; Modality corresponds to the type of cue given (1: visual, 2: auditory).
Figure 4A/B: coordinates of mean vocal positions in three-dimensional acoustic space. Cue corresponds to the cued number, Prod corresponds to the produced number, and Position corresponds to the ordinal position of that vocalization in the sequence.
Figure 4C/D: classifier performance (x1000) for each crow and each cue modality for sequences of differing lengths (2, 3, and 4 vocalizations), using the acoustic features of each vocalization to predict its ordinal position in the sequence.
Figure 4G: differences in the types of classified errors, where Etype corresponds to the type of error (1: stutters, 2: skips), Cue corresponds to the cued number, Prod corresponds to the produced number, and Perc Diff corresponds to the difference between the error proportion in the data and a shuffled control.
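The stutter/skip distinction can be illustrated with a toy read-out of decoded ordinal positions. This is a hypothetical sketch, not the manuscript's classifier: the function and the example sequences are invented, assuming each vocalization in a trial has been assigned a decoded position as described for Figure 4C/D:

```python
# Toy error read-out from decoded ordinal positions: a "stutter" (Etype 1)
# repeats a position, e.g. 1, 2, 2, 3; a "skip" (Etype 2) jumps over one,
# e.g. 1, 3, 4. The decoding rule and sequences here are illustrative only.

def classify_error(decoded):
    """Return 'stutter', 'skip', or 'correct' for decoded ordinal positions."""
    steps = [b - a for a, b in zip(decoded, decoded[1:])]
    if any(s == 0 for s in steps):
        return "stutter"  # a position was decoded twice in a row
    if any(s > 1 for s in steps):
        return "skip"     # an intermediate position is missing
    return "correct"
```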
SFig1: data to plot classifier performance for each crow as a function of absolute numerical distance from the cued number. Classifiers were trained with acoustic features plus the reaction time.
SFig2: data to plot classifier performance for each crow as a function of absolute numerical distance from the cued number. Classifiers were trained with only the acoustic features of the vocalizations.
SFig3: classifier performance (x1000) for each crow and cue modality when classifiers were tested on the other crows.
SFig4: data to plot classifier performance in early training sessions for visual stimuli.
SFig5: data to plot confusion matrices for an example crow, showing classifier performance for each trial length when the acoustic features of each vocalization are used to predict its ordinal position in the vocal sequence.
SFig6: data to plot the proportions of different errors. Cue corresponds to the cued number, Prod corresponds to the produced number, and the subsequent columns give the proportions of different types of transitions (correct progressions and 5 error types).
Code/Software
Data were analyzed using custom-written MATLAB scripts.
CrowsCountNumberofSelfGeneratedVocalizations_MainFigures.m is used to plot figures in the main text.
CrowsCountNumberofSelfGeneratedVocalizations_SuppFigures.m is used to plot figures in the supplementary materials.
For violin plots: Hoffmann, H. (2015). violin.m - Simple violin plot using MATLAB default kernel density estimation. INRES (University of Bonn), Katzenburgweg 5, 53115 Bonn, Germany. hhoffmann@uni-bonn.de
For shaded error bars: Campbell, R. (2024). raacampbell/shadedErrorBar (https://github.com/raacampbell/shadedErrorBar), GitHub. Retrieved April 12, 2024.