Neural and visual processing of social gaze cueing in typical and ASD adults
Data files
Jan 23, 2023 version files (707.21 MB total)
AllData.zip.001 (104.86 MB)
AllData.zip.002 (104.86 MB)
AllData.zip.003 (104.86 MB)
AllData.zip.004 (104.86 MB)
AllData.zip.005 (104.86 MB)
AllData.zip.006 (104.86 MB)
AllData.zip.007 (78.06 MB)
README.md (4.81 KB)
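The dataset is distributed as a multi-part archive. Below is a minimal sketch for reassembling and extracting it in Python, assuming the .001 through .007 files are sequential byte chunks of a single zip archive (the usual convention for split downloads); consult README.md for the authoritative instructions.

```python
# Sketch: reassemble the multi-part download and extract it.
# Assumes AllData.zip.001 ... .007 are sequential byte chunks of one zip
# archive; verify against the instructions in README.md.
import glob
import shutil
import zipfile

parts = sorted(glob.glob("AllData.zip.0*"))       # .001 through .007, in order
with open("AllData.zip", "wb") as out:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, out)        # append each chunk

with zipfile.ZipFile("AllData.zip") as archive:
    archive.extractall("AllData")                 # unpack the data files
    print(archive.namelist()[:5])                 # peek at the first entries
```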
Abstract
Atypical eye gaze in joint attention is a clinical characteristic of autism spectrum disorder (ASD). Despite this documented symptom, neural processing of joint attention tasks in real-life social interactions is not understood. To address this knowledge gap, functional near-infrared spectroscopy (fNIRS) and eye-tracking data were acquired simultaneously as ASD and typically developed (TD) individuals engaged in a gaze-directed joint attention task with a live human and robot partner. We tested the hypothesis that face processing deficits in ASD are greater for interactive faces than for simulated (robot) faces. Consistent with prior findings, neural responses during human gaze cueing modulated by face visual dwell time resulted in increased activity of ventral frontal regions in ASD and dorsal parietal systems in TD participants. Hypoactivity of the right dorsal parietal area during live human gaze cueing was correlated with autism spectrum symptom severity, as indexed by Brief Observation of Symptoms of Autism (BOSA) scores (r = -0.86). In contrast, neural activity in response to robot gaze cueing modulated by visual acquisition factors activated dorsal parietal systems in ASD, and this neural activity was not related to autism symptom severity (r = 0.06). These results are consistent with the hypothesis that altered encoding of incoming facial information to the dorsal parietal cortex is specific to live human faces in ASD. These findings open new directions for understanding joint attention difficulties in ASD by providing a connection between superior parietal lobule activity and live interaction with human faces.
Participants. Twenty ASD adults (mean age 27 ± 5.9 years; 18 right-handed, 2 left-handed (Oldfield, 1971)) and 30 typically developed (TD) adults (mean age 23 ± 4.4 years; 27 right-handed, 3 left-handed) participated in this study (Table 1). ASD diagnoses were confirmed by gold-standard, research-reliable clinician assessments, including the Autism Diagnostic Observation Schedule, 2nd Edition (ADOS-2; Lord et al., 2012), the Brief Observation of Symptoms of Autism (BOSA; Lord et al., 2020), and expert clinical judgment using DSM-5 criteria (American Psychiatric Association, 2013). Average ADOS-2 and BOSA Comparison Scores were 7 ± 0.26 and 7 ± 0.42, respectively. Assessment and diagnostic tests were performed in clinical facilities at the Yale Child Study Center and the Brain Function Laboratory. Participants were age-matched (Table 1) and recruited from ongoing research in the McPartland Lab, the Yale Developmental Disabilities Clinic, and the broader community through flyers and social media announcements. Inclusion criteria were age 18-45 years, IQ ≥ 70, and English speaking. Exclusion criteria were the same as in a previous investigation from the lab (Hirsch et al., 2022). All participants provided written and verbal informed consent under guidelines and regulations approved by the Yale University Human Investigation Committee (HIC # 1512016895) and were compensated for their participation. Assessment of the ASD participants’ capacity to give informed consent was conducted by clinical research staff, who monitored the process and confirmed verbal and non-verbal responses. ASD participants were accompanied by a member of the clinical team, who continuously evaluated their sustained consent to participate. Further information about participant demographics is outlined in the Supplementary Methods.
A research investigator was present during data acquisition and monitored for signs of discomfort during the experiment. Each participant was paired with a TD initiator. Two females (22-23 years old during data collection) served as human initiators throughout the entire study. The sample size sufficient for a conventional power of 0.80 was determined from the contrast (Real Face > Video Face) observed in a previous, similar study (Noah et al., 2020). Using the “pwr” package of R statistical software (Champely et al., 2017), a power of 0.80 at a significance level of p < 0.05 (uncorrected) is achieved with n = 16 subjects per diagnostic group. This estimate is consistent with similar calculations based on the signal strength in other relevant peak ROIs. Sample sizes of 20 (ASD) and 30 (TD) participants therefore provided adequate statistical power. The gender composition of the ASD group is consistent with the estimated 4:1 male:female ratio of ASD diagnosis.
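For reference, an equivalent calculation can be run in Python with statsmodels in place of the R “pwr” package described above; the effect size below is a placeholder for illustration only, not the value derived from the Noah et al. (2020) contrast.

```python
# Sketch of the sample-size calculation, equivalent in spirit to the R "pwr"
# calculation described above. The effect size is a placeholder for illustration.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=1.0,         # placeholder Cohen's d, not the published value
    alpha=0.05,              # p < 0.05, uncorrected
    power=0.80,              # conventional power
    alternative="two-sided",
)
print(round(n_per_group))    # minimum n per diagnostic group for this placeholder d
```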
Experimental Design and Statistical Analyses. The experiment consisted of two conditions: gaze cueing initiated by a human dyad partner and gaze cueing initiated by a robot with a simplified face. Maki (HelloRobot, Atlanta, Georgia; Payne, 2018; Scassellati, Brawer, et al., 2018) is a robot capable of dynamic eye and head movements and was donated by Yale University’s Department of Computer Science. Participants engaged in a gaze cueing task in which the initiator (robot or human) used eye gaze to direct the participant’s eyes to one of two circular targets located 13.4° to the left or right on an electronically controlled glass partition. Participants were instructed to use information from the initiator’s face to direct their gaze to the cued location. The order of right and left directions was randomized.
The participant and the initiator were seated at a table across from each other, approximately 140 cm apart. Between them, on the table, was the electronically controlled glass partition, or Smart Glass, which changed between transparent and opaque states (Figure 1A). The Smart Glass was pre-programmed to be transparent during the task blocks and opaque during rest periods. A scene camera was positioned on a camera mount attached to an articulated arm behind each participant and was aimed to record the participant’s view during the experiment. The human initiator received a visual cue (through a small screen on the other side of the glass, not visible to the participant) during the rest periods. The robot initiator was electronically programmed to make movements directed to the target and to blink randomly. Once the Smart Glass changed from opaque to transparent, the initiator looked at the participant’s eyes for 2 seconds and then averted gaze to a target (left or right) on the glass partition for 2 seconds (Figure 1B). The participant’s task was to follow the initiator’s gaze. The paradigm consisted of blocks of 3 gaze cueing events followed by 15 seconds of rest (opaque Smart Glass), for a total of 3 minutes per run (Figure 1C). Eye-tracking was used to confirm participant compliance for each trial.
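For orientation, the timing of a single task block described above can be laid out as follows (a minimal sketch using only the values given in the text; event labels are illustrative).

```python
# Minimal sketch of one task block's timing, per the description above:
# three gaze cueing events (2 s mutual gaze + 2 s averted gaze each),
# followed by 15 s of rest with the Smart Glass opaque.
EYE_CONTACT_S = 2.0
AVERTED_S = 2.0
EVENTS_PER_BLOCK = 3
REST_S = 15.0

schedule = []
t = 0.0
for _ in range(EVENTS_PER_BLOCK):
    schedule.append(("eye contact", t, t + EYE_CONTACT_S))
    t += EYE_CONTACT_S
    schedule.append(("averted gaze", t, t + AVERTED_S))
    t += AVERTED_S
schedule.append(("rest (opaque glass)", t, t + REST_S))

for label, start, end in schedule:
    print(f"{label:>20s}: {start:5.1f} - {end:5.1f} s")
```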
Eye-Tracking. Eye-tracking data for each participant were acquired using a Tobii Pro X3-120 eye tracker (Tobii Pro, Stockholm, Sweden) at a sampling rate of 120 Hz, mounted on the experimental apparatus facing the participant. A three-point calibration method was used to calibrate the eye tracker for each participant before experimental recording. The initiator looked straight ahead while the participant was instructed to look at the eyes of the initiator and then at three dot positions in front of the initiator’s face. The same calibration procedure was performed for robot interactions. Similar “live calibration” procedures have been used successfully in prior investigations of in-person social attention (Dravida et al., 2020; Falck-Ytter, 2015; Noah et al., 2020; Thorup et al., 2016). Participants alternated their gaze between approximately 0° and 13.4° of deflection as instructed for the gaze cueing task. The eye contact portions of the task were 4 seconds in length, with 3 per trial, for 21 seconds of expected eye contact over the trial duration.
Functional NIRS Signal Acquisition, Channel Localization, and Signal Processing. Similar methods have been described previously in studies from our lab (Hirsch et al., 2022; Kelley et al., 2021; Noah et al., 2020), and detailed methods are provided in the Supplementary Methods. The specific layout and coverage of the optode channels are shown in Figure 1D.
Eye-tracking Analysis. Eye-tracking data were exported from the Tobii system to a custom data processing pipeline in MATLAB (MathWorks, Natick, MA), which calculated eye contact events, accuracy, and pupil diameter. One of the 30 TD participants and two of the 20 ASD participants did not provide usable eye-tracking data due to calibration errors. Tobii Pro Lab software (Tobii Pro, Stockholm, Sweden) was used. For each run and each participant, a face box was manually defined for both the human and robot gaze cueing conditions. For the visual sensing measures of gaze duration (Dwell Time) and Gaze Variability, the horizontal components of the gaze trajectories from the gaze cue portions of each run were analyzed, focusing on samples within the face box range. Dwell Time was computed as the number of valid retained samples per interval divided by the sampling rate, yielding seconds. Gaze Variability was computed as the standard deviation of the sample durations centered within the eye box.
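A minimal sketch of these visual sensing measures is given below, assuming 120 Hz gaze samples with a horizontal position (in degrees) and a per-sample validity flag; the face box limits and variable names are hypothetical, and Gaze Variability is interpreted here as the spread of horizontal gaze samples within the box, which may differ from the exact pipeline definition.

```python
# Minimal sketch of Dwell Time and Gaze Variability under the assumptions above.
import numpy as np

FS = 120.0  # Tobii Pro X3-120 sampling rate (Hz)

def in_face_box(gaze_x, valid, face_box=(-5.0, 5.0)):
    """Boolean mask of valid samples inside the (hypothetical) face box."""
    return valid & (gaze_x >= face_box[0]) & (gaze_x <= face_box[1])

def dwell_time(gaze_x, valid):
    """Seconds of valid gaze on the face: retained samples / sampling rate."""
    return in_face_box(gaze_x, valid).sum() / FS

def gaze_variability(gaze_x, valid):
    """Standard deviation of horizontal gaze samples inside the face box."""
    mask = in_face_box(gaze_x, valid)
    return float(np.std(gaze_x[mask]))

# Synthetic check: one second of valid samples centered on the face.
x = np.random.default_rng(0).normal(0.0, 1.0, 120)
v = np.ones(120, dtype=bool)
print(dwell_time(x, v), gaze_variability(x, v))
```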
Participant Compliance. Participants were asked to follow the eyes of the initiator during the cued 4-second events, and compliance was measured by eye-tracking. For the TD group, eye-following accuracy was 100% in both the robot and human gaze cueing conditions. For the ASD group, eye-following accuracy was 100% in the robot condition and 99.4% in the human gaze cueing condition (Supplementary Figure 1).
All data files have been exported into .csv files.
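Once the archive is extracted, the exported .csv files can be browsed along these lines (a minimal sketch; the directory name, layout, and file names are hypothetical).

```python
# Sketch: list each exported .csv file with its dimensions and first columns.
# Adjust the directory pattern to the actual layout after extraction.
import glob
import pandas as pd

for path in sorted(glob.glob("AllData/**/*.csv", recursive=True)):
    df = pd.read_csv(path)
    print(path, df.shape, list(df.columns)[:5])
```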