Data from: Objective autonomic signatures of tinnitus and sound sensitivity disorders
Data files
Apr 10, 2025 version files (131.51 KB total):
- indiv_values.xlsx (27.18 KB)
- README.md (5.12 KB)
- ts.mat (68.59 KB)
- valence_rank_ordered_data.mat (30.61 KB)
Abstract
Hypersensitivity, phantom percepts, and sensory reactivity are core features of many neurological disorders. Direct, objective measurements of these features have proven difficult to identify, leaving subjective questionnaires as the primary means of assessing sensory disorder severity. Here, we studied neurotypical adults (n = 50) and adults with a combination of sound sensitivity and tinnitus (ringing of the ears; n = 47) and identified a new class of objective measurement that predicted individual differences in the Tinnitus Handicap Inventory (THI) and Hyperacusis Questionnaire (HQ). A neurophysiological assessment of central auditory gain demonstrated an elevation in participants with tinnitus and sound sensitivity but no association with symptom severity. Instead, accurate predictors of individual THI and HQ scores were identified in pupil dilations and facial movements elicited by emotionally evocative sounds. These findings highlight autonomic signatures of disrupted affective sound processing in persons with tinnitus and sound sensitivity disorders and introduce new approaches for their objective measurement.
Submitted data were collected for the following study:
Smith, S. S., Jahn, K. N., Sugai, J. A., Hancock, K. E., & Polley, D. B. (2025). Objective Autonomic Signatures of Tinnitus and Sound Sensitivity Disorders.
Descriptions
indiv_values.xlsx
Each row lists study measures for one participant, with columns as follows:
- Group: Group assignment of the study participant: neurotypical (NT; reported no sound sensitivity and no intermittent or chronic tinnitus) or disordered hearing (DH; based on clinical evaluation of chronic tinnitus or abnormal sound sensitivity).
- Tinnitus: Whether the study participant had chronic tinnitus (yes/no).
- Age: Age of study participant (years).
- HQ: Score on Hyperacusis Questionnaire (integer score).
- THI: Score on Tinnitus Handicap Inventory (integer score).
- AuditoryGain(nV/dB): Mean of the numerical derivative for each EEG growth function between 40 and 65 dB SL (nV/dB).
- IADS_Valence: Mean behavioral valence rating for audio clips from the International Affective Digitized Sounds (IADS) digital library (scale between 1 [positive valence bound] and 9 [negative valence bound]).
- IADS_Pupil(z-sc): Mean evoked pupil response for IADS sounds (z-score).
- IADS_SkinCond(iSCR): Mean integrated skin conductance response for IADS sounds (μS).
- IADS_Face(au): Mean facial movement in response to IADS sounds (arbitrary unit).
- IADS_PupilBaseline(%re_con_max): Mean pupil size in the baseline period of the IADS sounds (percent change relative to the pupil at maximum constriction).
- Con_max(au): Size of the constricted pupil when the dynamic light stimulus was at its brightest (arbitrary unit native to the Eyelink system).
- Light_range(au): Range of the pupil response to the dynamic light stimulus (arbitrary unit native to the Eyelink system).
- Digits_Behav9dB(%): Mean accuracy on 9 dB SNR trials of the multi-talker digits task (%).
- Digits_Behav0dB(%): Mean accuracy on 0 dB SNR trials of the multi-talker digits task (%).
- Digits_Pupil9dB(z-sc): Mean evoked pupil size on 9 dB SNR trials of the multi-talker digits task (z-score).
- Digits_Pupil0dB(z-sc): Mean evoked pupil size on 0 dB SNR trials of the multi-talker digits task (z-score).
- TinnitusPitch: Self-reported tinnitus pitch.
- TinnitusSoundDescriptor: Self-reported description of tinnitus sound.
- TinnitusSomatic: Whether tinnitus could be modulated during at least one of the six somatic maneuvers (yes/no).
Missing data are coded as n/a; detailed explanations are provided in the manuscript. A few participants had missing or incomplete behavioral data (missing HQ, n=2; missing valence assessment, n=1; missing digit recognition, n=1). Electroencephalography (EEG) data (n=4) and skin conductance recordings (n=41) were not usable on account of poor electrode contact. Pupil data (IADS, n=2; Digits, n=4; Light response, n=11) and facial movement data (n=4) were unusable due to inordinately high rates of blinking or failed tracking.
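A minimal Python sketch for loading the spreadsheet with pandas, assuming the column headers match the names listed above and the file is in the working directory:

```python
import pandas as pd

# Load per-participant measures; "n/a" entries become NaN.
df = pd.read_excel("indiv_values.xlsx", na_values="n/a")

# Example: mean THI score by group (NT vs. DH); NaN values are skipped.
print(df.groupby("Group")["THI"].mean())

# Example: count participants with a usable EEG-derived auditory gain estimate.
usable = df.dropna(subset=["AuditoryGain(nV/dB)"])
print(f"{len(usable)} participants with auditory gain estimates")
```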
valence_rank_ordered_data.mat
Each participant's valence ratings were used to rank-order their individual physiological and behavioral responses. Data are stored in a 3-dimensional MATLAB array (97x60x4) as follows:
- Dimension 1: Indexes the 97 study participants.
- Dimension 2: Indexes the 60 rank-ordered ratings, from most positive valence through to most negative.
- Dimension 3: Rank-ordered outcome measures, where index 1 is the valence rating (scale between 1 [positive valence bound] and 9 [negative valence bound]), index 2 is the evoked pupil response (z-score), index 3 is the integrated skin conductance response (μS), and index 4 is facial movement (a.u.).
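A minimal Python sketch for reading the array with scipy; the name of the MATLAB variable stored inside the file is not documented here, so it is discovered at load time:

```python
import numpy as np
from scipy.io import loadmat

mat = loadmat("valence_rank_ordered_data.mat")
# Skip MATLAB bookkeeping keys ("__header__", etc.) to find the data variable.
name = [k for k in mat if not k.startswith("__")][0]
data = mat[name]                # expected shape: (97, 60, 4)

valence = data[:, :, 0]         # rank-ordered valence ratings
pupil = data[:, :, 1]           # evoked pupil responses (z-score)

# Example: grand-average pupil response at each valence rank,
# ignoring missing entries.
print(np.nanmean(pupil, axis=0))
```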
ts.mat
Mean and standard error values for replotting the manuscript figures. The structure contains five fields:
- .aud_gain: EFR growth as a function of sound level.
- .pupil: sound-evoked pupil dilations.
- .scr: sound-evoked skin conductance.
- .light: pupillary light reflex.
- .face: sound-evoked facial movement.
Within each field, .x holds the plotting values along the abscissa, .y1 and .y2 the mean values along the ordinate, and .y1_se and .y2_se the corresponding standard errors.
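The fields can be read and replotted in Python with scipy and matplotlib, as in the sketch below; the stored variable name is again discovered at load time, and the axis labels are placeholders for whichever field is plotted.

```python
import matplotlib.pyplot as plt
from scipy.io import loadmat

# struct_as_record=False gives attribute-style access to MATLAB struct fields.
mat = loadmat("ts.mat", squeeze_me=True, struct_as_record=False)
ts = [v for k, v in mat.items() if not k.startswith("__")][0]

# Replot one field (here the sound-evoked pupil response) with +/- SE bands.
p = ts.pupil
for y, se, label in [(p.y1, p.y1_se, "y1"), (p.y2, p.y2_se, "y2")]:
    plt.plot(p.x, y, label=label)
    plt.fill_between(p.x, y - se, y + se, alpha=0.3)
plt.xlabel("x (abscissa values)")
plt.ylabel("Pupil response")
plt.legend()
plt.show()
```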
Sharing/Access information
References to questionnaire instruments:
- C. W. Newman, G. P. Jacobson, J. B. Spitzer, Development of the Tinnitus Handicap Inventory. Arch Otolaryngol Head Neck Surg 122, 143–148 (1996).
- S. Khalfa, S. Dubal, E. Veuillet, F. Perez-Diaz, R. Jouvent, L. Collet, Psychometric normalization of a hyperacusis questionnaire. ORL 64, 436–442 (2002).
Human subjects data
All participants provided their written informed consent to participate in the study and have agreed to allow their de-identified health information to be shared.