Geographic variation in the matching between call characteristics and tympanic sensitivity in the Weeping lizard

Cite this dataset

Labra, Antonieta et al. (2022). Geographic variation in the matching between call characteristics and tympanic sensitivity in the Weeping lizard [Dataset]. Dryad. https://doi.org/10.5061/dryad.mw6m905z2

Abstract

Effective communication requires a match among signal characteristics, environmental conditions, and receptor tuning and decoding. The degree of matching, however, can vary, among other reasons because different selective pressures affect the communication components. For evolutionary novelties, strong selective pressures are likely to act upon the signal and receptor to promote a tight match between them. We test this prediction by exploring the coupling between acoustic signals and auditory sensitivity in Liolaemus chiliensis, the Weeping lizard, the only one of more than 285 Liolaemus species known to vocalize. Individuals emit distress calls that convey information on predation risk to conspecifics, which may respond with antipredator behaviors upon hearing them. Specifically, we explored the match between the spectral characteristics of the distress calls and the tympanic sensitivities of two populations separated by more than 700 km, for which previous data suggested variation in their distress calls. We found that the populations differed in signal and receptor characteristics and that this signal variation was explained by population differences in body size. No precise match occurred between the communication components studied, and the populations differed in the degree of such correspondence. We suggest that this difference in matching between populations relates to evolutionary processes affecting the Weeping lizard distress calls.

Methods

Calls: We recorded the vocalizations between 11:00 and 16:00 h in a sound-attenuated booth whose walls and ceiling were covered with 50-cm-high foam wedges. Before a recording, and to avoid variations in body temperature that could affect sound production, lizards were exposed to a heat source so they could thermoregulate and reach the species' preferred body temperature. After the vocal recordings, we measured cloacal temperatures and excluded vocalizations from individuals with temperatures outside 35 ± 2°C (mean ± SE). Additional recordings from a given individual were obtained after a minimum of 48 h. We evoked distress calls by gently grasping the lizard with the thumb and forefinger and softly touching its snout with a finger for two minutes. The lizard was positioned 10 cm in front of a directional microphone (Sennheiser ME 66; frequency response: 40 Hz-22 kHz) connected to a digital recorder (Tascam DR-100). For the southern recordings, we also obtained sound levels (in dB SPL) with a precision integrating sound level meter (Brüel & Kjær 2230), previously calibrated with a sound level calibrator (Brüel & Kjær 4230) and positioned 10 cm in front of the focal lizard; the SPL values were read aloud onto the recording. For each individual, we averaged all its recorded sound levels regardless of the emitted call type (see below). The generated .WAV files (44.1 kHz, 16 bits) were high-pass filtered (cutoff: 200 Hz) and analyzed using Raven Pro 1.3 (Cornell Laboratory of Ornithology, Ithaca, NY).
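
As a minimal sketch of this pre-processing step, the high-pass filtering can be reproduced in R with the tuneR and seewave packages (the file names below are hypothetical; the original measurements were made in Raven Pro):

library(tuneR)
library(seewave)

# Read a 44.1 kHz, 16-bit recording (hypothetical file name)
rec <- readWave("distress_call_01.wav")

# High-pass FIR filter with a 200 Hz cutoff (passes 200 Hz up to Nyquist)
rec_filt <- fir(rec, f = rec@samp.rate, from = 200, bandpass = TRUE, output = "Wave")

# Save the filtered recording for spectro-temporal analysis
savewav(rec_filt, f = rec@samp.rate, filename = "distress_call_01_hp200.wav")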

We identified two types of distress calls: (i) harmonic calls, with a clear complete or partial harmonic structure, and (ii) noisy or non-harmonic calls, lacking any clear harmonic structure. We further classified harmonic calls as simple or complex, based on the absence or presence of nonlinear phenomena, respectively. We measured the duration (ms) of all call types, and for the harmonic calls we also counted the number of harmonics recognizable in the spectrograms (fast Fourier transform length = 1024 points, Hamming window, 87.5% overlap; frequency resolution = 488 Hz; time resolution = 0.256 ms), as this variable may modulate the responses to distress calls and might help to discriminate between populations. In addition, from the oscillograms we obtained the time to maximum amplitude (ms), measured from the start of the call, and from the fast Fourier transform we obtained the fundamental and dominant frequencies. These frequencies, and the number of harmonics, were measured in a segment free from nonlinear phenomena, preferably at the beginning of the signal. Although calls of this lizard species contain ultrasonic components, we could not record them, as our microphone was nominally sensitive only up to 22 kHz. However, since the energy in these calls decreases gradually toward the higher frequencies without energy gaps, we considered that calls with frequencies between 20 and 22 kHz contained ultrasonic components, which provides an estimate of the occurrence of ultrasound in these calls.
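
For illustration only, the spectro-temporal variables described above can also be extracted in R with seewave; this sketch is not the authors' Raven Pro workflow, and the file name is hypothetical:

library(tuneR)
library(seewave)

call <- readWave("harmonic_call_example.wav")  # hypothetical filtered recording
fs   <- call@samp.rate

# Call duration (ms)
dur_ms <- duration(call, f = fs) * 1000

# Time to maximum amplitude (ms), measured from the start of the call
t_max_ms <- (which.max(abs(call@left)) / fs) * 1000

# Dominant frequency track (kHz), 1024-point window with 87.5% overlap
dom_khz <- dfreq(call, f = fs, wl = 1024, ovlp = 87.5, plot = FALSE)

# Fundamental frequency estimate (kHz), taken near the start of the call
fun_khz <- fund(call, f = fs, wl = 1024, ovlp = 87.5, fmax = 10000, plot = FALSE)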

Tympanic sensitivities: The focal lizard was lightly anesthetized (i.e., motionless, but with normal lung respiration) via an intramuscular injection of Virbac Zoletil® 50 (0.4 µl/g body mass) in a forearm. This dosage was typically effective for 2-3 h, though some individuals required an additional half dose to complete the recordings. Experiments were done in the sound-attenuated booth described above, where the anesthetized lizard was placed on a temperature-controlled (~35°C) thermal plate (ReptiTherm®) resting on an anti-vibration table (TMC 63-500). The response of the left eardrum, or tympanic membrane, to the acoustic stimuli was measured with a laser Doppler vibrometer (Polytec CLV-2534). The compact sensor head of the laser was positioned 30 cm from the lizard’s eardrum, and the laser beam was aimed perpendicular to the tympanic surface, at the tip of the extracolumellar attachment, close to the center of the eardrum. We enhanced beam reflection by placing a ~1 mm² flake of highly reflective white correction tape at the target point of the laser beam, with the aid of a binocular light microscope (PZO OP-1, PZO, Warsaw, Poland). The vibrometer sensitivity was set at 5 mm/s, and the incoming signal was amplified by 20 dB with a custom-made amplifier. Automated custom software recorded the vibrometer output signal and controlled stimulus generation and presentation. For this, we used a data acquisition card (National Instruments NI-6071E), a programmable attenuator (PA5, System 3, Tucker-Davis Technologies, Alachua, FL, USA), and an amplifier (SKP Pro Audio Max 710X). Acoustic stimuli at frequencies up to 20 kHz were broadcast with an audio loudspeaker (Dynaudio BM 6, Skanderborg, Denmark), and those above this limit with an ultrasonic loudspeaker (Fostex Company, Tokyo, Japan); both loudspeakers were placed 50 cm in front of the focal lizard. We measured the response to ultrasonic frequencies to explore whether these frequencies might be involved in the species' communication.

Before the recordings, we calibrated the sound pressure using an ultrasonic ¼” free-field microphone (GRAS 40BE) powered by a preamplifier (GRAS 26CB), placed above the head of a realistic silicone lizard model (~5 cm) positioned where the focal lizard would later be placed. The GRAS microphone was calibrated within the audible frequency range with a sound level meter (Brüel & Kjær 2238) by broadcasting pure tones at the same frequencies used later in the trials. The microphone output was stored, and the measured SPL was used to automatically adjust the programmable attenuator to the SPL required during the recordings. Stimulus generation and signal acquisition were performed at a 200 kHz sample rate with 16-bit resolution.
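
The attenuator setting implied by this calibration reduces to simple arithmetic; the sketch below uses hypothetical values, as the actual routine was part of the automated custom software:

# Hypothetical SPL measured at the lizard position with 0 dB attenuation
measured_spl <- 92
# Playback levels used in the experiment (dB SPL)
target_spl <- c(55, 60, 70, 80)

# Attenuation (dB) to program into the PA5 for each target level
attenuation_db <- measured_spl - target_spl   # 37 32 22 12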

All lizards were individually exposed to stimuli consisting of pure tones and synthetic distress calls of each population; for logistical reasons, however, only a subset of eight adults (4 ♀, 4 ♂; SVL 86.61 ± 2.59 mm) from the central population was analyzed for the response to distress calls. We synthesized tones with a custom program; their duration was 100 ms, with rise and fall ramps of 10 ms. A sequence of tones was presented, starting at 0.1 kHz and increasing in 0.2 kHz steps from 0.2 to 9.0 kHz; from 9.0 to 20 kHz and from 20 to 40 kHz, the frequency steps were 0.5 and 2 kHz, respectively. Each tone was followed by a period of silence of the same duration as the tone. We controlled for intrapopulation variation in call characteristics by creating one call for each population with Adobe Audition 3.0 (Adobe Systems Inc.), based on the average spectro-temporal characteristics of each population's harmonic calls. The synthetic calls had a downward frequency modulation pattern, the pattern most frequently found in these populations (see Results). The synthetic calls of the central and southern populations had, respectively: six and three harmonics, durations of 71 and 42 ms, times to maximum amplitude of 26 and 19 ms, fundamental frequencies (which were also the dominant frequencies) of 2.7 and 6.3 kHz, and downward sweeps from 2.7 to 2.1 kHz and from 6.3 to 5.6 kHz.
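
A minimal R sketch of one such tone, generated at the 200 kHz rate used for stimulus generation and followed by an equal period of silence (the tone frequency is arbitrary and linear ramps are assumed, since the ramp shape is not specified; the authors used their own custom program):

fs   <- 200000   # Hz, stimulus generation rate
dur  <- 0.100    # s, tone duration
ramp <- 0.010    # s, rise and fall time
cf   <- 2000     # Hz, example tone frequency

t    <- seq(0, dur - 1/fs, by = 1/fs)
tone <- sin(2 * pi * cf * t)

# Apply linear rise and fall ramps (assumed shape)
n_ramp <- round(ramp * fs)
env    <- rep(1, length(tone))
env[1:n_ramp] <- seq(0, 1, length.out = n_ramp)
env[(length(tone) - n_ramp + 1):length(tone)] <- seq(1, 0, length.out = n_ramp)
tone <- tone * env

# Follow the tone with a silent interval of equal duration
stimulus <- c(tone, rep(0, length(tone)))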

Acoustic stimuli were broadcast at 55, 60, 70, and 80 dB SPL. The presentation order of the stimulus types and sound levels followed a counterbalanced design to avoid potential order effects. The laser output signal was recorded simultaneously with stimulus presentation, and the acquisition window included the stimulus and its silence interval. For each acoustic stimulus, we recorded 20 response replicates.
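
One simple way to counterbalance the four sound levels across subjects is a cyclic Latin square; this is only an illustrative sketch, since the exact counterbalancing scheme is not specified:

levels_db <- c(55, 60, 70, 80)
n <- length(levels_db)

# Each row gives the presentation order of sound levels for one subject
orders <- t(sapply(0:(n - 1), function(k) levels_db[(((0:(n - 1)) + k) %% n) + 1]))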

The acquired signals were analyzed with a custom-made script in the R environment, using the seewave package. For each of the 20 response replicates per tone, we obtained the RMS (root-mean-square) of an 80 ms segment in the middle of the stimulus and of the silence period. We computed the ratio between these two RMS values and discarded the replicates in the first quartile, i.e., those with the lowest signal-to-noise ratios, to reduce noise and obtain cleaner responses. The remaining replicates were averaged for further analysis. A fast Fourier transform (window length = 8192 points; frequency resolution = 24.41 Hz) was applied at the mid-point of the averaged response to obtain the vibration velocity of the eardrum. We then used these measurements to build the velocity transfer function across frequencies and sound levels. From these curves, we obtained the maximum velocity and the frequency at which it occurred, i.e., the best frequency. Additionally, we characterized the sensitivity of the tympanic response by considering (1) the sensitivity range, i.e., the frequency range over which the eardrum vibrated at least at half of the velocity recorded at the best frequency, and (2) the lower and upper frequency limits of this range.
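
A condensed R sketch of this per-tone pipeline, assuming that "replicates" is a hypothetical 20-column matrix of acquired laser signals (rows = samples at 200 kHz, with the tone in the first half of the acquisition window and silence in the second) and that "freq_khz" and "velocity" hold the resulting transfer function:

library(seewave)

fs       <- 200000                    # Hz, acquisition rate
n_samp   <- nrow(replicates)
seg      <- round(0.080 * fs)         # 80 ms analysis segment
mid_stim <- round(n_samp * 0.25)      # midpoint of the stimulus half
mid_sil  <- round(n_samp * 0.75)      # midpoint of the silence half
i_stim   <- (mid_stim - seg %/% 2):(mid_stim + seg %/% 2 - 1)
i_sil    <- (mid_sil  - seg %/% 2):(mid_sil  + seg %/% 2 - 1)

# Signal-to-noise ratio of each replicate: RMS(stimulus segment) / RMS(silence segment)
snr <- apply(replicates, 2, function(x) rms(x[i_stim]) / rms(x[i_sil]))

# Discard the first (lowest) SNR quartile and average the remaining replicates
avg_resp <- rowMeans(replicates[, snr > quantile(snr, 0.25)])

# From the velocity transfer function: best frequency and half-maximum sensitivity range
best_i     <- which.max(velocity)
best_freq  <- freq_khz[best_i]
half_max   <- velocity[best_i] / 2
in_range   <- velocity >= half_max
sens_range <- c(lower = min(freq_khz[in_range]), upper = max(freq_khz[in_range]))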

To analyze the matching between signals and tympanic sensitivities, we recorded the tympanic response to the synthetic distress calls, obtaining the RMS of 20 replicates per call. Values falling in the first quartile were discarded, and the remaining values were averaged for further analyses. In contrast to the tone analyses, here we used the RMS of the whole stimulus, because the calls had different temporal characteristics. Mean power spectra of the synthetic distress calls and of the tympanic responses were obtained with a fast Fourier transform (window length = 2048 points; frequency resolution = 97.66 Hz). This coarser frequency resolution, compared with that used in the tone analyses, yielded smoother spectra. Finally, we estimated the matching between the spectra of the synthetic distress calls and the tympanic sensitivities, following a method similar to that of Moreno-Gómez et al. (2013), by computing the spectral cross-correlation at zero lag between these spectra with the function “ccf” in R.
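
A sketch of this matching step, assuming "call_wave" (a synthetic call) and "response_wave" (the averaged tympanic response) are hypothetical objects sampled at 200 kHz; it uses seewave's meanspec and the base R function ccf:

library(seewave)

fs <- 200000

# Mean power spectra with the 2048-point window described above
spec_call <- meanspec(call_wave, f = fs, wl = 2048, plot = FALSE)
spec_resp <- meanspec(response_wave, f = fs, wl = 2048, plot = FALSE)

# Zero-lag cross-correlation between the two spectra: the matching estimate
cc <- ccf(spec_call[, 2], spec_resp[, 2], lag.max = 0, plot = FALSE)
cc$acf[1]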

Funding

Agencia Nacional de Investigación y Desarrollo, Award: 1090251

Agencia Nacional de Investigación y Desarrollo, Award: 1120181