Data from: The breath shape controls intonation of mouse vocalizations

Cite this dataset

MacDonald, Alastair; Hebling, Alina; Wei, Xin Paul; Yackle, Kevin (2024). Data from: The breath shape controls intonation of mouse vocalizations [Dataset]. Dryad.


Intonation in speech is the control of vocal pitch to layer expressive meaning onto communication, like increasing pitch to indicate a question. Also, stereotyped patterns of pitch are used to create distinct sounds with different denotations, like in tonal languages and, perhaps, the ten sounds in the murine lexicon. A basic tone is created by exhalation through a constricted laryngeal voice box, and it is thought that more complex utterances are produced solely by dynamic changes in laryngeal tension. But perhaps the shifting pitch also results from altering the swiftness of exhalation. Consistent with the latter model, we show that intonation in most vocalization types follows deviations in exhalation that appear to be generated by the re-activation of the cardinal breathing muscle for inspiration. We also show that the brainstem vocalization central pattern generator, the iRO, can create this breath pattern. Consequently, ectopic activation of the iRO not only induces phonation, but also the pitch patterns that compose most of the vocalizations in the murine lexicon. These results reveal a novel brainstem mechanism for intonation.

README: The breath shape controls intonation of mouse vocalizations

The data sets contain the data files for each experiment (airflow trace, sound file, opto pulse timestamps, and EMG).

Description of the data and file structure

Airflow for baseline vocalization recordings:   The plethysmography chamber was modified to accommodate a microphone to record vocalizations (CM16/CMPA, Avisoft Bioacoustics), and the airflow in the chamber was measured by a spirometer (FE141, ADInstruments). Both data streams were acquired through a DAQ board (PCI-6251, National Instruments) and written to disk for offline analysis. Sound was acquired at 400 kHz and airflow at 1 kHz.

Files are named with the date, cage number, animal ID, and trial number, in the form date_cageNumber_animalID_trialNumber.
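A minimal sketch of how the naming scheme above could be split into its four fields. The function name and the example stem are illustrative, not part of the dataset; the exact format of each field is an assumption.

```python
def parse_trial_name(stem):
    """Split a file stem of the form date_cageNumber_animalID_trialNumber.

    Note: EMG files carry extra suffixes (e.g. _ch1_larynx_ch2_dia), which
    this simple split would leave attached to the trial field.
    """
    date, cage, animal, trial = stem.split("_", 3)
    return {"date": date, "cage": cage, "animal": animal, "trial": trial}
```

For example, `parse_trial_name("20240101_c1_m2_t3")` would return the four fields of a hypothetical trial stem.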

.wav file – waveform audio file of the vocal recording (from which sound spectrograms can be computed)

.txt file – breathing airflow sampled at 1 kHz.

--> When these recordings are combined with optogenetics, an additional .txt file with the suffix _opto is included, indicating when the light stimulus is delivered (values > 1).

--> When recordings are combined with EMGs, the files are annotated with the channel used for acquisition and the corresponding muscle.

 _ch1_larynx_ch2_dia indicates that the channel 1 recording is of the laryngeal muscle and channel 2 is the diaphragm muscle.

--> Empty cells have 'null' values.
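The file descriptions above can be turned into a small loading sketch. This is a hypothetical example, not the lab's analysis code: the file names and helper names are illustrative, and the handling of 'null' cells and the opto threshold follow the descriptions stated in this README.

```python
import numpy as np
from scipy.io import wavfile

AIRFLOW_FS = 1_000    # Hz, airflow sampling rate stated above
SOUND_FS = 400_000    # Hz, audio sampling rate stated above

def load_airflow(path):
    """Load a 1 kHz airflow trace from a .txt file; 'null' cells become NaN."""
    return np.genfromtxt(path, missing_values="null", filling_values=np.nan)

def load_audio(path):
    """Load the .wav recording; returns (sample_rate, samples)."""
    return wavfile.read(path)

def opto_on(opto_trace):
    """Boolean mask of samples where the light signal is on (values > 1)."""
    return opto_trace > 1
```

Usage would look like `airflow = load_airflow("date_cage_animal_trial.txt")`, with a time axis built as `np.arange(airflow.size) / AIRFLOW_FS`; audio is sampled 400x faster than airflow, so the two streams must be aligned by time, not by sample index.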

Sharing/Access information

Please contact Kevin Yackle if more information is needed to analyze the data.


The code for analysis is available on the Yackle Lab GitHub.


National Institute of Neurological Disorders and Stroke, Award: R01NS126400

National Institutes of Health, Award: R34NS127104, BRAIN initiative

Simons Foundation

Esther A. & Joseph Klingenstein Fund