
A mechanistic account of visual discomfort

Cite this dataset

Penacchio, Olivier; Otazu, Xavier; Wilkins, Arnold J.; Haigh, Sarah M. (2023). A mechanistic account of visual discomfort [Dataset]. Dryad. https://doi.org/10.5061/dryad.g79cnp5kw

Abstract

Much of the neural machinery of the early visual cortex, from the extraction of local orientations to contextual modulations through lateral interactions, is thought to have developed to provide a sparse encoding of contour in natural scenes, allowing the brain to efficiently process most of the visual scenes we are exposed to. Certain visual stimuli, however, cause visual stress, a set of adverse effects ranging from simple discomfort to migraine attacks and, in the extreme, epileptic seizures, all phenomena linked with an excessive metabolic demand. The theory of efficient coding suggests a link between excessive metabolic demand and images that deviate from natural statistics. Yet, the mechanisms linking energy demand and image spatial content in discomfort remain elusive. Here, we used theories of visual coding that link image spatial structure and brain activation to characterize the response to images observers reported as uncomfortable. Our biologically based neurodynamic model of the early visual cortex included excitatory and inhibitory layers to implement contextual influences. We found three clear markers of aversive images: a larger overall activation in the model, a less sparse response, and a more unbalanced distribution of activity across spatial orientations. When the ratio of excitation over inhibition was increased in the model, a phenomenon hypothesised to underlie interindividual differences in susceptibility to visual discomfort, the three markers of discomfort progressively shifted towards values typical of the response to uncomfortable stimuli. Overall, these findings offer a unifying mechanistic explanation for why there are differences between images and between observers, suggesting how visual input and idiosyncratic hyperexcitability give rise to abnormal brain responses that result in visual stress.

README: Data for the article "A mechanistic account of visual discomfort"

Penacchio, Olivier; Otazu, Xavier; Wilkins, Arnold J.; Haigh, Sarah M., March 2023

The article can be found at: https://www.frontiersin.org/articles/10.3389/fnins.2023.1200661/full

This repository contains all the psychophysical data of the experiments, all the data and code to run the numerical simulations, all the data and code to analyse the neurodynamic model output, and the code for the statistical inference.

Description of the data and file structure

Summary:

  • File count: 34
  • Total file size: 897 MB
  • Range of individual file sizes: 1 kB – 196,539 kB
  • File formats: Excel *.xlsx/Matlab *.mat

File naming:

General: The experiments (Experiment 1 and Experiment 2) were run with four sets of stimuli, named architecture 1, architecture 2, art 1, and art 2. Each data file includes the name of the set it corresponds to (e.g., the discomfort ratings for each observer and the raw metrics for architecture 1 are in architecture1_inh_1.0_TallLight_March23.xlsx).

  • The Excel files (*.xlsx): give the discomfort ratings for each image in each set (architecture 1 and 2, N = 74 images each; art 1 and 2, N = 50 images each) and each observer (identified by a numeric ID; architecture 1 and 2, N = 10, art 1, N = 53, and art 2, N = 79 observers), together with all the metrics (including the three markers) computed for each frequency channel (scales 1 to 12, written as _scN in table columns) and for the whole population (written as _all). The metrics come in two forms: raw, in files ending with "_TallLight_March23.xlsx", and standardized to mean = 0 and sd = 0.5, in files ending with "_TallLight_standGelman_March23.xlsx". For example, the data (psychophysics and standardized metrics) for set art 1 are in "art1_inh_1.0_TallLight_standGelman_March23.xlsx".
  • The Matlab files (*.mat): files whose names include "min_freq_32_epsilon_1.3_kappay_1.35_normal_output" contain the raw activity of the model for a given image in a given set. This activity is described by four *.mat files, each consisting of a 20x1 cell array (one entry per membrane time) in which each cell is a 256x256x12x4 matrix (256x256 is the image size, 12 the number of frequency channels, and 4 the number of spatial orientations) giving the activity of the model units at the corresponding membrane time. A minimal loading sketch is given after the note below.

N.B.: as these files are large (a total of ~580,000 kB, i.e. ~580 MB, per image) we only provide one example in this repository (the first image of the first set, architecture 1, so files starting with architecture1_256_1_min_freq_32_...); all the other files are kept on our University servers and are available upon reasonable request.
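For orientation, here is a minimal Matlab/Octave sketch for inspecting one of these files. The name of the variable stored inside the .mat file is not fixed above, so treat it as an assumption and discover it with fieldnames():

```matlab
% Minimal sketch: inspect one raw model-output file.
% Assumption: the file stores a single variable (the 20x1 cell array);
% fieldnames() lists whatever is actually stored.
S = load(['architecture1_256_1_min_freq_32_epsilon_1.3_kappay_1.35_' ...
    'normal_output_2_saturation_gain_control_curv_ON.mat']);
disp(fieldnames(S))              % discover the stored variable name(s)
vars = struct2cell(S);
act  = vars{1};                  % 20x1 cell array, one entry per membrane time
resp = act{end};                 % 256x256x12x4 activity at the last membrane time
% e.g., total activity in frequency channel 3, spatial orientation 2:
total_ch3_or2 = sum(resp(:, :, 3, 2), 'all');
```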

Files including "_ESH_metrics_for_each_gain.mat": give the three markers (E: activation, S: sparseness, and H: isotropy) for all the images in each set and for each value of the gain of the inhibitory layer (9 values regularly spaced between 0 and 1, with 1 being the value for the reference model; see the article for details). For example, "architecture1_ESH_metrics_for_each_gain.mat" gives the E, S, and H metrics for all the images (N = 74) in set architecture 1 (so, e.g., E is a 74x9 matrix).
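A minimal sketch, assuming the file stores matrices named E, S, and H as described above, and assuming the nine gain values are regularly spaced from 0 to 1:

```matlab
% Sketch: load the per-gain markers for set architecture 1 and summarise them.
load('architecture1_ESH_metrics_for_each_gain.mat');  % expected to load E, S, H
gains = linspace(0, 1, 9);       % assumed spacing of the nine gain values
meanE = mean(E, 1);              % mean activation across the 74 images, per gain
plot(gains, meanE, '-o');
xlabel('gain of the inhibitory layer'); ylabel('mean activation (E)');
```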

Files starting with "stimuli_above_0.85_": give the number of images rated above a given discomfort threshold for each of the nine values of gain.

Files starting with "residuals_VR2015_": give the Fourier residuals (see Penacchio and Wilkins, Visual discomfort and the spatial distribution of Fourier energy, Vision Research, 2015) for the four sets of images, stored in a matrix called "residual" of size 1 x (number of images in the set) (e.g., 1 x 74 for residuals_VR2015_architecture1_luminance_Jan23.mat).
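A minimal loading sketch (the variable name "residual" is given above):

```matlab
% Sketch: load the Fourier residuals (Penacchio & Wilkins, 2015) for one set.
load('residuals_VR2015_architecture1_luminance_Jan23.mat');  % loads 'residual'
size(residual)                          % 1x74 for architecture 1
fprintf('mean residual: %.3f\n', mean(residual));
```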

  • The zipped files (*.zip): give the stimuli for the experiments, namely three-channel colour images (*.tif) for architecture 1 and 2 (N = 74 for each set), the corresponding luminance input to the model (*.png), and the luminance stimuli and input to the model for art 1 and 2 (*.png, N = 50 in each set).

Description of tables:

All the tables (Excel files, *.xlsx: architecture1_inh_1.0_TallLight_March23.xlsx, architecture2_inh_1.0_TallLight_March23.xlsx, art1_inh_1.0_TallLight_March23.xlsx, art2_inh_1.0_TallLight_March23.xlsx, architecture1_inh_1.0_TallLight_standGelman_March23.xlsx, architecture2_inh_1.0_TallLight_standGelman_March23.xlsx, art1_inh_1.0_TallLight_standGelman_March23.xlsx, art2_inh_1.0_TallLight_standGelman_March23.xlsx) are in the same format. Each row corresponds to one trial: one participant rating one image from one of the sets. The 147 columns correspond to the following variables:

  1. Subject: a number corresponding to the participant (between 1 and 10 for architecture 1 and 2, between 1 and 58 for art 1, and between 1 and 80 for art 2)
  2. Image: the reference number for the image rated for discomfort (between 1 and 74 for architecture 1 and 2, between 1 and 50 for art 1 and 2)
  3. Excluded: a flag for excluded participants (0 non-excluded/1 excluded). (Some participants were excluded because they showed no variability in their responses.)
  4. Rating: the participant's rating of discomfort on a Likert scale for the trial. Higher values correspond to greater self-reported discomfort.
  5. Columns 5-147: all the computed metrics. The suffix "_scN", with N = 1, ..., 12, denotes the metric applied to the activity of the neurodynamic network restricted to frequency channel N, and the suffix "_all" denotes the metric computed over the full model population (a minimal reading sketch follows this list).
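A minimal reading sketch; the metric column names beyond the four listed above are not fixed here, so the sketch prints them rather than assuming them. The comment on standardization is an assumption consistent with the "standGelman" file names (Gelman's 2008 proposal of dividing by two standard deviations):

```matlab
% Sketch: read one ratings/metrics table and keep non-excluded participants.
T = readtable('architecture1_inh_1.0_TallLight_March23.xlsx');
T = T(T.Excluded == 0, :);              % drop excluded participants
disp(T.Properties.VariableNames(1:10))  % inspect the first few column names
% The *_standGelman_* files store metrics standardized to mean = 0 and
% sd = 0.5, i.e., for a metric column x (assumed):
%   x_std = (x - mean(x)) / (2 * std(x));
```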

File list:

*architecture1_256_1_min_freq_32_epsilon_1.3_kappay_1.35_normal_output_2_isaturation_gain_control_iFactor_OFF.mat
*architecture1_256_1_min_freq_32_epsilon_1.3_kappay_1.35_normal_output_2_saturation_gain_control_curv_OFF.mat
*architecture1_256_1_min_freq_32_epsilon_1.3_kappay_1.35_normal_output_2_saturation_gain_control_curv_ON.mat
*architecture1_256_1_min_freq_32_epsilon_1.3_kappay_1.35_normal_output_2_saturation_gain_control_iFactor_ON.mat
*architecture1_ESH_metrics_for_each_gain.mat
*architecture1_inh_1.0_TallLight_March23.xlsx
*architecture1_inh_1.0_TallLight_standGelman_March23.xlsx
*architecture1_metrics_for_all_gains.mat
*architecture2_ESH_metrics_for_each_gain.mat
*architecture2_inh_1.0_TallLight_March23.xlsx
*architecture2_inh_1.0_TallLight_standGelman_March23.xlsx
*architecture2_metrics_for_all_gains.mat
*art1_ESH_metrics_for_each_gain.mat
*art1_inh_1.0_TallLight_March23.xlsx
*art1_inh_1.0_TallLight_standGelman_March23.xlsx
*art1_metrics_for_all_gains.mat
*art2_ESH_metrics_for_each_gain.mat
*art2_inh_1.0_TallLight_March23.xlsx
*art2_inh_1.0_TallLight_standGelman_March23.xlsx
*art2_metrics_for_all_gains.mat
*residuals_VR2015_architecture1_luminance_Jan23.mat
*residuals_VR2015_architecture2_luminance_Jan23.mat
*residuals_VR2015_art1_luminance_Jan23.mat
*residuals_VR2015_art2_luminance_Jan23.mat
*stimuli_above_0.85_threshold_Architecture1.mat
*stimuli_above_0.85_threshold_Architecture2.mat
*stimuli_above_0.85_threshold_Art1.mat

Code/scripts:

In Zenodo Software repository:

*The R script for the statistical inference is "statistical_analysis_A_mechanistic_account_of_visual_discomfort_March2023.R"; before running it, set the working directory on line 6 to the folder that contains all the data (directory <- "your folder").

*The Matlab code pipeline to run the numerical simulations with the neurodynamic system is in the zipped folder "sparse discomfort Matlab code.zip".

*The Matlab script to extract the metrics from the output of the neurodynamic model is "extract_metrics_from_output_of_dynamical_model.m".
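For readers who want a feel for the three markers before running the full pipeline, the following is an illustrative sketch only: the entropy-based formulas below are stand-ins chosen for brevity, not the definitions implemented in extract_metrics_from_output_of_dynamical_model.m (see the article for the actual formulas):

```matlab
% Illustrative stand-ins for the three markers, computed from one
% membrane-time response resp (256x256x12x4, as described above).
E = mean(resp(:));                            % overall activation
p = resp(:) / sum(resp(:));
S = -sum(p .* log(p + eps));                  % low entropy ~ sparser response
o = squeeze(sum(resp, [1 2 3]));              % total activity per orientation (4x1)
q = o / sum(o);
H = -sum(q .* log(q + eps)) / log(numel(q));  % isotropy: normalized entropy
```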

Figures/Supplementary data/Supplementary results

In Zenodo Supplemental repository:

-All five figures in the manuscript (in *.png format):
*figure1_Mechanistic_account_discomfort_Penacchio_et_al_2023.png
*figure2_Mechanistic_account_discomfort_Penacchio_et_al_2023.png
*figure3_Mechanistic_account_discomfort_Penacchio_et_al_2023.png
*figure4_Mechanistic_account_discomfort_Penacchio_et_al_2023.png
*figure5_Mechanistic_account_discomfort_Penacchio_et_al_2023.png

-A visualization of all the ratings of discomfort for all the observers and all the sets:
pdf file *Supplementary File Raw A mechanistic account of visual discomfort Penacchio et al 2023.pdf

-A visualization of all the values of the three main markers for all the images in set architecture 1:
pdf file *Supplementary File all markers values for Architecture 1 A mechanistic account of visual discomfort Penacchio et al 2023.pdf

Methods

Psychophysics: the ratings for discomfort were collected online using Qualtrics, in agreement with COVID-19 protocols. The protocol was approved by the Institutional Review Board at the University of Nevada, Reno (333057), and was conducted in accordance with the Declaration of Helsinki.

Numerical simulation: the simulations of the neurodynamic model of the early visual system were done in Matlab. Please see the README.md file for details on Experiment 1 and Experiment 2.

Usage notes

The software required for the statistical analysis in Experiments 1 and 2 is R (R Core Team (2020). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria).

The software required for running the simulations, extracting the markers of discomfort and generating the figures is Matlab (proprietary; MATLAB and Statistics Toolbox Release 2019b, 9.7.0.1190202 (R2019b). Natick, Massachusetts, The MathWorks Inc.). An open-source alternative to run the Matlab routines is Octave (https://octave.org/).

Funding

Maria Zambrano Fellowship for attraction of international talent for the requalification of the Spanish university system—NextGeneration EU (ALRC)

NIH COBRE, Award: PG20GM103650

NIH R15 AREA grant, Award: MH122935

Spanish Ministerio de Ciencia e Innovación, Gobierno de España, Award: PID2020-118254RB-I00/AEI/10.13039/501100011033

Agència de Gestió d'Ajuts Universitaris i de Recerca (AGAUR), Award: 2021-SGR-01470

CERCA Programme / Generalitat de Catalunya