
Dataset: Age effects in emotional memory and associated eye movements

Cite this dataset

Stam, Daphne et al. (2022). Dataset: Age effects in emotional memory and associated eye movements [Dataset]. Dryad. https://doi.org/10.5061/dryad.3j9kd51p2

Abstract

Enhanced memory has been observed for negative events. Here, we investigate its association with spatiotemporal attention, consolidation, and age. Eye tracking is a well-suited method for studying visual attention to emotional stimuli. Twenty young adults and twenty-one older adults encoded stimuli depicting neutral faces, angry faces, and houses while their eye movements were recorded. The encoding phase was followed by an immediate and a delayed (48 h) recognition assessment. Linear mixed model analyses of recognition performance with group, emotion, and their interaction as fixed effects revealed increased performance for angry compared with neutral faces in the young adults group only. Furthermore, young adults showed enhanced memory for angry faces compared with older adults. This effect was associated with a shorter fixation duration for angry than for neutral faces in the older adults group. The results also revealed that total fixation duration was a strong predictor of face memory performance.

Methods

Participants

Forty-one subjects participated in our study. They were recruited by advertisements for participation in an eye-tracker memory experiment. Participants did not receive financial compensation for their participation. Inclusion criteria consisted of (1) 18–30-year age range (young adults group) or 50–90-year age range (older adults group) and (2) an MMSE score above 25.

The young adults and older adults groups consisted of 20 participants [7 males (35%); mean age ± SD = 22 ± 2 years, range 18–29] and 21 participants [9 males (43%); mean age ± SD = 69 ± 7 years, range 53–87], respectively. One participant from the older adults group was excluded from the eye movement analysis due to technical issues. Participants completed the Addenbrooke’s Cognitive Examination III (ACE-III), which includes the Mini–Mental State Examination (MMSE). All participants had an ACE-III score above 71.

Eye Tracker and Eye Movement Recordings

Eye movement data were collected during the encoding phase at a sampling rate of 120 Hz using the Tobii eye tracker TX300 and processed with Tobii Studio 3.4.7. During recording, the eye tracker collects raw eye movement data points, which are processed into fixations by applying a fixation filter and then used to calculate eye-tracking metrics. We applied default settings, including the Tobii fixation filter, with a velocity threshold of 0.84 pixels/ms and a distance threshold (the distance between two consecutive fixations) of 35 pixels (default). In short, peak values are first identified, i.e., values greater than both of their two closest neighbors. The list of peaks is then processed into fixations, where the start and end points of a fixation are set by two consecutive peaks. The spatial position of each fixation is calculated as the median of the unfiltered data points in that interval. Second, the Euclidean distances between all fixations are calculated, and if the distance between two consecutive fixations falls below the second, user-defined threshold, the two fixations are merged into a single fixation. The process is repeated until no fixation points are closer to each other than the threshold. A detailed description of the Tobii fixation filter can be found in the Tobii Studio user manual (https://www.tobiipro.com/siteassets/tobii-pro/user-manuals/tobii-pro-studio-user-manual.pdf).
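The peak-detection and merging steps described above can be sketched in a few lines of Python. This is an illustrative simplification, not the exact Tobii implementation (which operates on velocity estimates over sliding windows); the function names and the averaging of merged positions are our own assumptions.

```python
import math
from statistics import median

def find_peaks(velocities):
    """Indices of values greater than both of their two closest neighbors."""
    return [i for i in range(1, len(velocities) - 1)
            if velocities[i] > velocities[i - 1] and velocities[i] > velocities[i + 1]]

def fixation_position(points):
    """Fixation position: per-coordinate median of the raw gaze points in the interval."""
    xs, ys = zip(*points)
    return (median(xs), median(ys))

def merge_close_fixations(fixations, dist_threshold=35.0):
    """Repeatedly merge consecutive fixations whose Euclidean distance falls
    below the distance threshold (default 35 pixels), until none remain."""
    merged = True
    while merged:
        merged = False
        out, i = [], 0
        while i < len(fixations):
            if i + 1 < len(fixations):
                (x1, y1), (x2, y2) = fixations[i], fixations[i + 1]
                if math.hypot(x2 - x1, y2 - y1) < dist_threshold:
                    # Simplification: merged position is the midpoint of the pair.
                    out.append(((x1 + x2) / 2, (y1 + y2) / 2))
                    i += 2
                    merged = True
                    continue
            out.append(fixations[i])
            i += 1
        fixations = out
    return fixations
```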

Statistical Analysis

Behavioral Analyses

Behavioral results were analyzed according to signal detection theory. R-Score Plus was used to calculate d′ for confidence-rating designs. d′ was calculated as a function of category (face vs. house), emotion (angry vs. neutral), interval (immediate recognition, IR, vs. delayed recognition, DR), and group (older adults vs. young adults). We also calculated the mean interval between the encoding phase and DR (lag) for every participant.
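R-Score Plus fits full confidence-rating ROC models; as a simpler illustration of the underlying sensitivity index, the standard two-point d′ = z(hit rate) − z(false-alarm rate) can be computed as below. The log-linear correction for extreme rates is a common convention, not necessarily the one R-Score Plus uses.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear
    correction so rates of exactly 0 or 1 stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)
```

A participant at chance (equal hit and false-alarm rates) yields d′ = 0; better discrimination yields larger positive values.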

To evaluate the anticipated outcomes for group differences in d′ in the IR phase, we fitted the following general multivariate regression model, which takes repeated measures within subjects into account. Let Yi be a vector of repeated measures for the ith subject (i = 1, …, N). The model assumes that Yi satisfies Yi = Xiβ + εi, with Xi a matrix of covariates (e.g., intercept, group, emotion condition, and group × emotion condition), β a vector of regression coefficients, and εi a vector of error components with εi ~ N(0, Σ). For the variance/covariance structure Σ of each subject, we considered a compound symmetry and an unstructured variance/covariance matrix; selection of the adequate variance/covariance matrix was based on a likelihood-ratio test. Reference coding was used for group (2 levels: older adults = 1 vs. young adults = 0) and emotion (2 levels: neutral = 1 vs. angry = 0). To evaluate main and interaction effects, Bonferroni-corrected post hoc tests were used. Note that this model is a special case of a linear mixed model (Verbeke and Molenberghs, 2000) and that the mean structure Xiβ (the parameters of interest) can be interpreted as in a classical ANOVA or regression model.
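The mean structure Xiβ under reference coding can be sketched on simulated data. This is not the authors' SPSS model (which also fits the within-subject covariance Σ); ordinary least squares on hypothetical data merely illustrates how the design matrix with group, emotion, and their interaction is built, with illustrative coefficient values chosen by us.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 40 subjects, each contributing two emotion conditions.
n_subjects = 40
group = np.repeat(rng.integers(0, 2, n_subjects), 2)  # older adults = 1, young = 0
emotion = np.tile([0, 1], n_subjects)                 # neutral = 1, angry = 0

# Design matrix Xi under reference coding:
# intercept, group, emotion, group x emotion.
X = np.column_stack([np.ones(group.size), group, emotion, group * emotion])

# Simulated d' with an illustrative (made-up) group x emotion interaction.
beta_true = np.array([2.0, -0.3, -0.5, 0.4])
y = X @ beta_true + rng.normal(0, 0.2, group.size)

# OLS estimate of the mean-structure coefficients.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With this coding, the intercept is the mean d′ for young adults viewing angry faces, and the interaction coefficient captures how the neutral-minus-angry difference changes in older adults.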

Second, we performed a similar model but with category (2 levels: house vs. face) instead of emotion as predictor. These analyses were performed separately for the two memory stages (IR and DR).

Third, we performed a similar model but with interval (2 levels: IR = 1 vs. DR = 0) as predictor for each condition (house, face, angry face, neutral face) separately. Finally, a similar model was used with group (2 levels: older adults = 1 vs. young adults = 0), interval (2 levels: IR = 1 vs. DR = 0), and group × interval as predictors. All analyses were performed in SPSS.

Eye-Tracker Analyses

Eye movement data were calculated for house, face, and three areas of interest within the face: mouth, nose, and eyes. For every participant, two eye movement indices were recorded: total fixation duration and fixation count. Total fixation duration is the sum of the durations (in seconds) of all fixations within an area of interest, across all test stimuli in the experiment.

Fixation count measures the number of fixations in each area of interest for all test stimuli throughout the experiment. If during the recording the participant leaves and returns to the same media element, this is counted as a new fixation. A detailed description of the metric measures can be found in the Tobii Studio user manual (https://www.tobiipro.com/siteassets/tobii-pro/user-manuals/tobii-pro-studio-user-manual.pdf).
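Given a table of individual fixations labeled by area of interest (AOI), the two metrics above reduce to a sum and a count per AOI. A minimal sketch, assuming a hypothetical record layout with `aoi` and `duration` fields (Tobii Studio exports richer tables):

```python
from collections import defaultdict

# Hypothetical fixation records for one participant.
fixations = [
    {"aoi": "eyes", "duration": 0.31},
    {"aoi": "nose", "duration": 0.18},
    {"aoi": "eyes", "duration": 0.42},
    {"aoi": "mouth", "duration": 0.25},
]

totals = defaultdict(float)  # total fixation duration per AOI (seconds)
counts = defaultdict(int)    # fixation count per AOI

for fix in fixations:
    totals[fix["aoi"]] += fix["duration"]
    counts[fix["aoi"]] += 1
```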

We exported the gaze data from Tobii Studio to SPSS for further analysis. Statistical tests on the gaze data were preceded by a normality check on the distributions of the respective residuals by means of a Shapiro–Wilk test. In case normality could not be assumed, non-parametric tests were performed (Mann–Whitney and Wilcoxon tests).
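The decision rule above (parametric test when normality holds, nonparametric otherwise) can be sketched with SciPy for a two-group comparison. For simplicity this checks normality on the raw samples rather than on model residuals as described above, and the function name and alpha level are our own choices.

```python
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Welch t-test if both samples pass Shapiro-Wilk normality,
    otherwise fall back to the Mann-Whitney U test."""
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    if normal:
        return "t-test", stats.ttest_ind(a, b, equal_var=False).pvalue
    return "mann-whitney", stats.mannwhitneyu(a, b).pvalue
```

The Wilcoxon signed-rank test (`stats.wilcoxon`) would be the analogous fallback for paired, within-subject comparisons.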

In order to investigate the association between the behavioral data and eye movements, we performed Spearman correlations. We computed correlations between d’ (IR, DR) and eye tracker data (total fixation duration and fixation count) during encoding for both groups separately (young adults and older adults).
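A Spearman correlation between d′ and an eye-tracking metric takes one value per participant for each variable. A minimal sketch with invented numbers (not the study's data):

```python
from scipy import stats

# Hypothetical per-subject values: d' at immediate recognition and
# total fixation duration (s) on faces during encoding.
d_prime_ir = [1.2, 1.8, 0.9, 2.1, 1.5, 2.4]
total_fix_dur = [10.1, 14.2, 8.7, 16.0, 11.9, 17.3]

# Rank-based correlation, robust to monotone nonlinear relationships.
rho, p = stats.spearmanr(d_prime_ir, total_fix_dur)
```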

Usage notes

All analyses were performed in SPSS.

Funding

Sequoia Fund for Research on Ageing and Mental Health, Award: C24/18/095

KU Leuven, Award: C24/18/095