Dryad

Separable processes for live “in-person” and “zoom-like” faces

Data files

Jul 09, 2024 version files 2.65 GB

Abstract

Increased reliance on Zoom-like (webcam) platforms for interpersonal communication has raised the question of how this virtual format compares with real face-to-face interaction. The question is also relevant to current models of face processing. Neural coding of simulated faces engages feature-selective processes in the ventral visual stream, whereas two-person live face-to-face interactions engage additional face processes in the lateral and dorsal visual streams. However, it is not known whether, or how, live in-person face processing differs from live virtual face processing, given that the faces and tasks are essentially the same. Current views of functional specificity predict no neural difference between the virtual and live conditions. Here we compare the same live faces viewed both over a video format and in person, using measures of functional near-infrared spectroscopy (fNIRS), eye tracking, pupillometry, and electroencephalography (EEG). Neural activity was increased in dorsal regions for in-person face gaze and in ventral regions for virtual face gaze. Longer dwell times on the face, increased arousal indexed by pupil diameter, increased neural oscillation power in the theta band, and increased cross-brain coherence were also observed for the in-person condition. These findings highlight the fundamental importance of real faces and natural interactions for models of face processing.