Data from: Investigating face processing in online interactions via UK-US Hyperscanning using fNIRS.
Data files
Jan 12, 2026 version files, 2.31 GB total:
- FileList.txt (295 B)
- README.md (14.19 KB)
- RunOrder.txt (645 B)
- UCL.tar (1.71 GB)
- Yale.tar (603 MB)
Abstract
Videoconferencing technology has become a staple of everyday life and has found widespread use in business, education, and telemedicine. Despite this, there have been few empirical studies investigating the neural correlates of interactions during this form of communication. This study investigates the neural mechanisms of face processing during online videoconferencing, employing functional near-infrared spectroscopy (fNIRS). We synchronised stimulus presentation and acquisition of fNIRS data across labs in the UK and US using custom Python software and a third-party computer system. Using this framework, we examined how different presentations of faces (live-online vs. static images) influence social cognition and inter-brain coupling (IBC) in a videoconferencing context. Forty participants (20 dyads) engaged in online sessions in which they viewed either dynamic video feeds or static images of their partners' faces. In line with our hypotheses, our findings did not show preferential activity in the right supramarginal gyrus during the observation of live online faces compared to static faces, whilst marginally higher IBC was found between the angular gyri during the observation of live faces compared to static faces. These findings suggest that online studies of social interaction provide a relevant field of investigation.
Title of Dataset: Investigating face processing in online interactions via UK-US Hyperscanning using fNIRS.
Author Information
Name: Uzair Hakim
Institution: University College London
Address: Malet Place Engineering Building, Gower Street, London, WC1E 6BT, UK
Email: uzair.hakim.17@ucl.ac.uk
Principal Investigator Information
Name: Joy Hirsch
ORCID: 0000-0002-1418-6489
Institution: Yale School of Medicine
Address: 300 George Street, New Haven, CT, 06511, USA
Email: joy.hirsch@yale.edu
Principal Investigator Information
Name: Ilias Tachtsidis
Institution: University College London
Address: Malet Place Engineering Building, Gower Street, London, WC1E 6BT, UK
Email: i.tachtsidis@ucl.ac.uk
Author/Alternate Contact Information
Name: J. Adam Noah
ORCID: 0000-0001-9773-2790
Institution: Yale School of Medicine
Address: 300 George Street, New Haven, CT, 06511, USA
Email: adam.noah@yale.edu
Date of data collection: Approximate collection dates are 2022-01-01 through 2025-02-19.
Geographic location of data collection:
300 George Street, New Haven, CT, 06511, USA.
Alexandra House, 17-19 Queen Square, London, WC1N 3AZ, UK.
SHARING/ACCESS INFORMATION
Licenses/restrictions placed on the data: None.
Links to publications that cite or use the data: None.
Links to other publicly accessible locations of the data: None.
Links/relationships to ancillary data sets: None.
Was data derived from another source? No
Recommended citation for this dataset:
Hakim, U., Noah, J. A., Zhang, X., Gunasekara, N., Hamilton, A., Pinti, P., Tachtsidis, I., & Hirsch, J. (2025). Investigating face processing in online interactions via UK-US hyperscanning. Imaging Neuroscience.
DATA & FILE OVERVIEW
This dataset includes five files: 1) UCL.tar and 2) Yale.tar, tar archives containing all raw and exported data collected during the experiment at each location; 3) this README.md file; 4) FileList.txt, a list of the files in the tar archives; and 5) RunOrder.txt, which lists the condition order for each participant's runs.
Each subject completed data recording in a single visit with a human partner.
File List:
The types of files included are briefly listed below. For full details on file names and contents, see the DATA-SPECIFIC INFORMATION section.
- fNIRS data files: .csv files containing oxyhemoglobin, deoxyhemoglobin, and total hemoglobin concentration for each channel at each time point. Data were collected with a 6 ms sample time per channel.
- fNIRS channel location files: txt file containing MNI coordinates for each fNIRS channel.
- Experiment order: txt file containing the order in which stimuli were presented to each participant.
For each participant there are four conditions.
Condition 1 = Live Face, Mask On
Condition 2 = Live Face, Mask Off
Condition 3 = Static Face, Mask On
Condition 4 = Static Face, Mask Off
In all cases, the stimulus is a view of the partner's face on a computer screen.
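For programmatic use, this condition coding can be captured as a simple lookup table. A minimal Python sketch is below; note that the exact representation of the condition codes in RunOrder.txt is an assumption to verify against that file.

```python
# Condition codes as described above. How these codes appear in
# RunOrder.txt should be verified against the file itself.
CONDITIONS = {
    1: ("Live Face", "Mask On"),
    2: ("Live Face", "Mask Off"),
    3: ("Static Face", "Mask On"),
    4: ("Static Face", "Mask Off"),
}

face_type, mask_state = CONDITIONS[2]
print(face_type, mask_state)  # Live Face Mask Off
```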
Additional related data collected that was not included in the current data package: None.
Are there multiple versions of the dataset? No
If yes, name of file(s) that was updated:
Why was the file updated?
When was the file updated?
METHODOLOGICAL INFORMATION
Description of methods used for collection/generation of data:
Paradigm
The participants were seated 70 cm from the screen. Participants gazed either at a live video feed of their partner, presented via Zoom, or at a static image of their partner. Participants were asked to either wear a face mask or take it off according to the condition. In all conditions, the partner's face was maximised on the screen. Participants were instructed to gaze at their partner's face.
fNIRS data collection
Data were collected via fNIRS while individuals viewed another human partner’s face.
fNIRS data were collected via a multichannel continuous-wave system (LABNIRS, Shimadzu Corporation, Kyoto, Japan). At Yale, this consisted of 40 emitter-detector optode pairs; at UCL, it consisted of 28 emitter-detector pairs.
During the task, optodes were connected to a cap placed on the participant's head, sized to fit comfortably. For consistency in cortical coverage, the middle anterior optode was placed 2 cm above the nasion. At UCL, the top of the cap was positioned at the participant's Cz. After cap placement, hair was cleared from optode holders using a lighted fiber-optic probe (Daiso, Hiroshima, Japan) prior to optode placement. Optodes were arranged in a matrix, contacting the scalp, enabling acquisition of 134 channels at Yale and 88 channels at UCL. After optode placement and prior to beginning the experiment, signal-to-noise ratio was assessed by measuring attenuation of light for each channel, with adjustments made as needed (Noah et al., 2015; Tachibana et al., 2011).
fNIRS signal acquisition, optode localization, and signal processing were similar to methods described previously (Dravida et al., 2020; Hirsch et al., 2017, 2022; Kelley et al., 2021; Noah et al., 2020).
Recording of optode locations
At Yale, locations of optodes and electrodes were recorded for each participant using the Structure Sensor scanner (Boulder, CO, USA), which creates a 3D model (.obj file) of the participant's head and cap.
At UCL, a Polhemus Liberty electromagnetic digitizer was used.
At Yale, locations of the standard anatomical landmarks (nasion, inion, Cz, T3, and T4) as well as optode locations were manually placed on the 3D model using MATLAB. Locations were then corrected for cap drift using custom MATLAB scripts, which rotated optode locations around the Montreal Neurological Institute (MNI) X-axis from the left ear towards the midline (Eggebrecht et al., 2012; Okamoto & Dan, 2005). This brought the Cz optode in line with the anatomical Cz according to the original placement, accounting for stereotyped tilting of the cap towards the left ear that could occur during optode removal.
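To illustrate the kind of rotation involved, here is a minimal Python sketch, not the lab's actual MATLAB correction script; the coordinates and the angle computation below are hypothetical and for demonstration only.

```python
import numpy as np

def rotate_about_x(points, theta):
    """Rotate an Nx3 array of coordinates about the X-axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[1, 0, 0],
                    [0, c, -s],
                    [0, s,  c]])
    return points @ rot.T

# Hypothetical positions (mm): where the Cz optode was digitised vs. where
# the anatomical Cz sits. Cap drift is modelled as a rotation in the Y-Z plane.
measured_cz = np.array([0.0, -15.0, 85.0])
anatomical_cz = np.array([0.0, -12.0, 88.0])
theta = (np.arctan2(anatomical_cz[2], anatomical_cz[1])
         - np.arctan2(measured_cz[2], measured_cz[1]))

optodes = np.array([[30.0, -40.0, 60.0],   # toy optode coordinates
                    [0.0, -15.0, 85.0]])
corrected = rotate_about_x(optodes, theta)  # Cz row now points along the anatomical Cz direction
```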
At UCL, locations recorded using the Polhemus system were processed with custom MATLAB scripts to obtain MNI coordinates corresponding to optode locations.
The Yale data consist of 128 channels.
The UCL data consist of 88 channels, with the exception of three participants (S19, S20, and S21), who have 76 channels.
Environmental/experimental conditions:
During the experiment, the overhead lights of the room were extinguished. An experimenter was present out of the view of the participant during each run.
Methods for processing the data: Data included in the .tar files are raw, unprocessed data.
Instrument- or software-specific information needed to interpret the data:
Data are presented in a generic text file format, making them broadly accessible for analysis in a wide array of software. The software used by the lab is described here.
fNIRS data were collected via a multichannel continuous-wave LABNIRS system, producing OMM files that were converted to text files, which can be analyzed using MathWorks MATLAB with the NIRS-SPM (Ye et al., 2009) package.
Standards and calibration information, if appropriate:
During experimental set up, optode holders were cleared of hair using a lighted fiber-optic wand, ensuring that scalp contact was made. After fNIRS optode placement and prior to beginning the experiment, signal-to-noise ratio was assessed by measuring attenuation of light for each channel, with manual adjustments made as needed.
Describe any quality-assurance procedures performed on the data:
Optode connectivity was reviewed and adjusted prior to starting the experiment and was further monitored over the course of the experiment so that connectivity issues could be addressed between runs if needed.
People involved with sample collection, processing, analysis, and/or submission: Dr. Uzair Hakim carried out data collection, conducted analyses on the data, and produced this document. Dr. J. Adam Noah was responsible for data collection at Yale and assisted in the design of the experiment. Dr. Xian Zhang assisted in data analysis and software upkeep, and produced the associated code. Dr. Joy Hirsch and Dr. Ilias Tachtsidis oversaw the project.
DATA-SPECIFIC INFORMATION
FileList.txt is an organizational table indicating which files are present.
- Each column corresponds to a subject.
- Each row corresponds to a file.
- Values in the table are binary, indicating the presence or absence of each file for each subject. (A minimal parsing sketch follows below.)
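A minimal Python sketch for reading this table, assuming a whitespace-delimited layout with file names in the first column and one column of 0/1 flags per subject; the actual delimiter and header layout should be verified against FileList.txt.

```python
import pandas as pd

# Assumed layout: header row of subject IDs, first column of file names,
# 0/1 flags elsewhere. Verify against FileList.txt before relying on this.
file_list = pd.read_csv("FileList.txt", sep=r"\s+", index_col=0)

# Subjects missing at least one file:
incomplete = file_list.columns[(file_list == 0).any()].tolist()
print(incomplete)
```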
Files are organised as follows:
The root data folder contains a folder for data at each site, UCL and Yale.
Within data/UCL are the data from participants at UCL. The fNIRS data for each participant are in a single .csv file. The channel locations are in a text file named MNI.txt.
Within data/Yale are the data from participants at Yale. The fNIRS data are 8 .csv files; each .csv file is a separate condition run of the experiment. The condition for each run is given in the RunOrder.txt file. The channel locations are in a text file named MNI.txt.
fNIRS data
The format of the name of fNIRS files:
SUBJECTID_DATE_TIME.csv
A description of the contents of fNIRS files:
Each row is a sample.
- Column 1 is time in seconds.
- Column 2 is the trigger, indicating the onset of a task condition.
- Column 3 is empty (no information).
- Column 4 is the oxyhemoglobin concentration of ch1.
- Column 5 is the deoxyhemoglobin concentration of ch1.
- Column 6 is the total hemoglobin concentration of ch1.
- Column 7 is the oxyhemoglobin concentration of ch2.
- Column 8 is the deoxyhemoglobin concentration of ch2.
- [...]
- The final column is the total hemoglobin concentration of ch128 for Yale and ch88 for UCL.
Columns indicated by [...] continue in the pattern of three columns per channel, corresponding to oxyhemoglobin, deoxyhemoglobin, and total hemoglobin concentration, in that order. There are 128 channels in total for Yale and 88 for UCL.
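Given this layout, a minimal Python sketch for loading one file and splitting it into per-channel arrays; the file name below is hypothetical, and the assumptions about the header row and trigger coding should be verified against an actual file.

```python
import numpy as np
import pandas as pd

# Hypothetical file name following the SUBJECTID_DATE_TIME.csv pattern.
# A header row is assumed absent; verify against an actual file.
df = pd.read_csv("S001_20220101_1200.csv", header=None)

time = df.iloc[:, 0].to_numpy()     # column 1: time in seconds
trigger = df.iloc[:, 1].to_numpy()  # column 2: condition-onset trigger
hemo = df.iloc[:, 3:].to_numpy()    # skip empty column 3

n_channels = hemo.shape[1] // 3     # 128 at Yale, 88 at UCL
oxy   = hemo[:, 0::3]               # oxyhemoglobin, one column per channel
deoxy = hemo[:, 1::3]               # deoxyhemoglobin
total = hemo[:, 2::3]               # total hemoglobin

# Condition onset times, assuming the trigger column is zero except at
# onsets (an assumption to verify against the data).
onsets = time[np.flatnonzero(trigger)]
```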
The format of the name of fNIRS channel location files:
MNI.txt
A description of the contents of fNIRS channel location files:
- Columns 1, 2, and 3 correspond to the MNI X, Y, and Z coordinates, respectively.
- Rows correspond to channels.
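Loading these coordinates in Python is a one-liner, assuming whitespace-delimited columns (pass delimiter="," instead if the file turns out to be comma-separated).

```python
import numpy as np

# Rows are channels; columns are MNI X, Y, Z coordinates.
mni = np.loadtxt("MNI.txt")
assert mni.shape[1] == 3
```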
References
Delorme, A., & Makeig, S. (2004). EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods, 134(1), 9–21.
Dravida, S., Noah, J. A., Zhang, X., & Hirsch, J. (2020). Joint Attention During Live Person-to-Person Contact Activates rTPJ, Including a Sub-Component Associated With Spontaneous Eye-to-Eye Contact. Frontiers in Human Neuroscience, 14. https://doi.org/10.3389/fnhum.2020.00201
Eggebrecht, A. T., White, B. R., Ferradal, S. L., Chen, C., Zhan, Y., Snyder, A. Z., Dehghani, H., & Culver, J. P. (2012). A quantitative spatial comparison of high-density diffuse optical tomography and fmri cortical mapping. Neuroimage, 61(4), 1120–1128. https://doi.org/10.1016/j.neuroimage.2012.01.124
Hirsch, J., Zhang, X., Noah, J. A., Dravida, S., Naples, A., Tiede, M., Wolf, J. M., & McPartland, J. C. (2022). Neural correlates of eye contact and social function in autism spectrum disorder. PLOS ONE, 17(11), e0265798. https://doi.org/10.1371/journal.pone.0265798
Hirsch, J., Zhang, X., Noah, J. A., & Ono, Y. (2017). Frontal, temporal, and parietal systems synchronize within and across brains during live eye-to-eye contact. NeuroImage, 157, 314–330. https://doi.org/10.1016/j.neuroimage.2017.06.018
Kelley, M., Noah, J. A., Zhang, X., Scassellati, B., & Hirsch, J. (2021). Comparison of human social brain activity during eye-contact with another human and a humanoid robot. Frontiers in Robotics and AI.
Noah, J. A., Ono, Y., Nomoto, Y., Shimada, S., Tachibana, A., Zhang, X., Bronner, S., & Hirsch, J. (2015). fMRI Validation of fNIRS Measurements During a Naturalistic Task. Journal of Visualized Experiments : JoVE, 100. https://doi.org/10.3791/52116
Noah, J. A., Zhang, X., Dravida, S., Ono, Y., Naples, A., McPartland, J. C., & Hirsch, J. (2020). Real-time eye-to-eye contact is associated with cross-brain neural coupling in angular gyrus. Frontiers in Human Neuroscience, 14, 19. https://doi.org/10.3389/fnhum.2020.00019
Okamoto, M., & Dan, I. (2005). Automated cortical projection of head-surface locations for transcranial functional brain mapping. Neuroimage, 26(1), 18–28. https://doi.org/10.1016/j.neuroimage.2005.01.018
Parker, T. C., Zhang, X., Noah, J. A., Tiede, M., Scassellati, B., Kelley, M., McPartland, J., & Hirsch, J. (2023). Neural and visual processing of social gaze cueing in typical and ASD adults. medRxiv, 2023.01.30.23284243.
Tachibana, A., Noah, J. A., Bronner, S., Ono, Y., & Onozuka, M. (2011). Parietal and temporal activity during a multimodal dance video game: An fNIRS study. Neuroscience Letters, 503(2), 125–130. https://doi.org/10.1016/j.neulet.2011.08.023
Ye, J. C., Tak, S., Jang, K. E., Jung, J., & Jang, J. (2009). NIRS-SPM: Statistical parametric mapping for near-infrared spectroscopy. NeuroImage, 44(2), 428–447. https://doi.org/10.1016/j.neuroimage.2008.08.036
Baltrušaitis, T., Robinson, P., & Morency, L.-P. (2016). OpenFace: An open source facial behavior analysis toolkit. 2016 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE.
Human subjects data
All participants in this study provided explicit written consent through a standardized informed consent process. This consent explicitly authorized the collection of their data for research purposes, publication of fully de-identified data in public repositories, and open-access sharing of anonymized findings. Shared data do not contain any names, addresses (physical or email), phone numbers, social security/ID numbers, account credentials, or facial features in visual data. Shared data are named and numbered with a unique code (e.g., S001, S002) that cannot be traced back to a participant's identity. Files incorporate relevant metadata, such as project name, data type, and collection date, in a standardized format explained in this README file.
