************************************************* Readme *************************************************

This readme explains what sort of files are saved in this repository. The files saved here are used to generate the figures in the paper titled "Human visual exploration reduces uncertainty about the sensed world". In this study the participants' eyes were tracked with an EyeLink 1000 eye-tracker as they performed the "Scene construction task" described in the paper (see below for details). In total, 22 participants performed this task. Each subject first underwent a pre-training phase consisting of twenty trials, to acquaint them with the experimental setup. The data from the pre-training phase are not included in the analysis. The subjects then performed five blocks of the task: the first two blocks were training blocks and the following three blocks were testing blocks. Each block consisted of a hundred trials. The collected gaze coordinates and the other relevant files from each block are saved in folders named 'Subject X', where X is the subject number. See Contents for details.

************************************************ Contents ************************************************

Subjects are numbered from 1 to 22. The folders are named "Subject X", where X corresponds to the subject number. Under each folder the files below, from the pre-training phase and the training and testing blocks, can be found.

- demo.asc      : The gaze coordinates recorded while performing the task, in .asc format.
- TrialAsci.mat : A Matlab workspace that contains all the recorded gaze coordinates and registered messages for each of the 100 trials in a block.
- o.mat         : A cell array that contains the sequences of discrete observations for each trial in a block. Each cell contains the observation sequence of one trial.
  For example, o{23} shows where the subject looked and what they saw in those locations while performing the scene construction task on the 23rd trial. The first row, o{23}(1,:), shows which objects the participant observed, in sequence; each number in this vector corresponds to an object (see the first table below). The second row, o{23}(2,:), shows which locations the participant looked at, in sequence (see the second table below).
- Score         : This vector contains how many points the subject had after each trial. Note that it has 101 entries: because the subjects are given 100 points at the beginning of each block of trials, the first entry of "Score" is always 100. The score then changes with the subject's performance as the game progresses.
- Success       : This vector contains ones and zeros. Ones indicate correctly categorised trials, whereas zeros indicate incorrectly categorised trials.

Numbers     What
---------   ------
1      ---> Null
2      ---> Bird
3      ---> Seed
4      ---> Cat
5      ---> Right feedback
6      ---> Wrong feedback

Numbers     Where
---------   ----------------
1      ---> Central fixation
2      ---> Top left quadrant
3      ---> Bottom left quadrant
4      ---> Top right quadrant
5      ---> Bottom right quadrant
6      ---> 'Flee' choice location
7      ---> 'Feed' choice location
8      ---> 'Wait' choice location

For example, if the discrete observations for the 23rd trial are as below:

o{23}(1,:) = [ 1 2 3 5 ]   % what
o{23}(2,:) = [ 1 3 5 7 ]   % where

it would mean that the subject first looked at the central fixation (location 1) and observed 'null' (object 1). Then the subject looked at the bottom left quadrant (location 3) and saw the 'bird' (object 2). Subsequently the subject looked at the bottom right quadrant (location 5) and saw the 'seed' (object 3). Now the subject knows that the scene is of the 'Feed' category, because the 'seed' is next to the 'bird'.
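As an aside, the two coding tables above can be applied mechanically. The following Python sketch (an illustrative helper, not part of the dataset; in Matlab one would index the rows of the cell directly) decodes a what/where pair of rows into labels:

```python
# Code tables from this readme: object ("what") and location ("where") labels.
WHAT = {1: "Null", 2: "Bird", 3: "Seed", 4: "Cat",
        5: "Right feedback", 6: "Wrong feedback"}
WHERE = {1: "Central fixation", 2: "Top left quadrant",
         3: "Bottom left quadrant", 4: "Top right quadrant",
         5: "Bottom right quadrant", 6: "'Flee' choice location",
         7: "'Feed' choice location", 8: "'Wait' choice location"}

def decode(what_row, where_row):
    """Pair each 'what' code with its 'where' code and translate both to labels."""
    return [(WHAT[w], WHERE[l]) for w, l in zip(what_row, where_row)]

# The worked example from this readme (trial 23):
print(decode([1, 2, 3, 5], [1, 3, 5, 7]))
```

Applied to the worked example, the first pair comes out as ('Null', 'Central fixation') and the last as ('Right feedback', "'Feed' choice location").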
The subject reports his/her beliefs by choosing the choice location associated with 'Feed' (location 7) and gets right feedback (object 5). See below for the rules and details of the 'Scene construction task'.

***************************************** Scene construction task ****************************************

---------------------
|         |         |
|    2    |    4    |
|         |         |
--------- 1 ---------
|         |         |
|    3    |    5    |
|         |         |
---------------------

    6        7        8
 (Flee)   (Feed)   (Wait)

The scene construction task is a gaze-contingent task in which the subjects explore the scene with their eyes. The subjects start exploring the scene from the central fixation, which corresponds to the number "1" in the scene above. Looking at the fixation cross triggers the trial; the scene to be explored is then displayed on the screen. There are four locations (2, 3, 4, 5) that can hold the following objects: 'null', 'bird', 'seed', 'cat'. The objects in these locations are masked at the beginning of the trial, and looking at a location discloses the object it holds. In this task each scene is associated with a category: 'Flee', 'Feed' or 'Wait'. The relative locations of the objects determine the category of the scene. Once one is sure about the category of the scene, one can report one's beliefs by choosing the location associated with that category. Locations 6, 7 and 8 are choice locations associated with the categories Flee, Feed and Wait respectively. One chooses these locations by making button presses on a button box.

*************************** Messages registered during trials in the .asc file ***************************

The beginning and ending of each trial are registered in the .asc files. For example, the beginning of the ninth trial is registered with the message "trial_start9" and its ending with the message "trial_stop9". Once the trial starts, a fixation cross appears on the screen.
The fixation cross is registered with the message "FixCross" in the .asc file. This fixation cross was gaze contingent: upon looking at it, the scene to be explored was displayed on the screen. The purpose of the fixation cross was to ensure that the subjects start exploring the scene from the fixation cross and not from any other location. Upon looking at the fixation cross, the message "VeiledScene" is registered in the .asc file. This means that the scene to be explored is displayed on the screen. Once the scene is displayed, the subjects are free to explore it in any way they want. Looking at the grey dots within a black circle in the scene discloses the masked objects (see Figure 1 in the paper for details). These objects are: 'null', 'bird', 'seed', 'cat'. For example, if the masked object in the top left quadrant is the 'bird', then looking at the top left quadrant registers the following message in the .asc file: "print bird 2". The words following "print" show which object was seen at which location in the scene; here the registered object is 'bird' and the location at which it was seen is the top left quadrant, which corresponds to location 2 in the scene. A location and the object seen there are registered only once, unless one looks at a different location and then revisits the same location. If the scene being explored is of the 'Flee' category and one chooses a choice location not associated with 'Flee', for example location 8, one has made an incorrect categorisation. This is registered in the .asc file with the following message: "print false 8", meaning that one made an incorrect categorisation by choosing location 8, which is associated with 'Wait'. Following an incorrect categorisation the subjects are given both audio (a long beep) and visual feedback indicating that the scene was incorrectly categorised. This feedback is registered with the following message in the .asc file: "Incorrect Feedback".
After making an incorrect categorisation one can still explore the scene and make button presses to categorise it, provided the time threshold for that trial has not yet been exceeded. Note that if the first categorisation in a trial is incorrect, that trial is registered as incorrect even if one makes a correct categorisation later in the same trial. If the scene being explored is of the 'Flee' category and one chooses the location associated with 'Flee', namely location 6, then the following message is registered in the .asc file: "print true 6". This means that one made a correct categorisation by choosing the choice location associated with 'Flee' (location 6). Following a correct categorisation the subjects are given both audio (a short beep) and visual feedback indicating that the scene was correctly categorised. This feedback is registered with the following message in the .asc file: "Correct Feedback". A correct categorisation concludes the trial. If one fails to make a categorisation within the given time threshold, the trial counts as an incorrect categorisation and the following messages are registered: "Too Slow Visual Feedback", "Too Slow Audio Feedback". A trial concludes either when the time threshold is exceeded or when one makes a correct categorisation. Once a trial concludes, a message is registered in the .asc file depending on whether the scene was correctly or incorrectly categorised: "+2" if the scene was correctly categorised, "-4" if it was incorrectly categorised, indicating how many points one scored as a result of the categorisation. After this message the total score at the end of the trial is registered with the message "Score: X", where X is the total score.
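The per-trial messages described above can be pulled out of an .asc file programmatically. The following Python sketch is illustrative rather than part of the dataset, and it assumes EyeLink's usual "MSG <timestamp> <text>" line format for registered messages; adjust the pattern if your export differs:

```python
import re

# Message lines in EyeLink .asc exports typically look like "MSG 1234567 trial_start9".
MSG_RE = re.compile(r"^MSG\s+(\d+)\s+(.*)$")

def parse_messages(lines):
    """Return (timestamp, text) pairs for every registered message."""
    out = []
    for line in lines:
        m = MSG_RE.match(line.strip())
        if m:
            out.append((int(m.group(1)), m.group(2)))
    return out

def print_events(messages):
    """Extract 'print <object-or-outcome> <location>' events, e.g. ('bird', 3)."""
    events = []
    for _, text in messages:
        parts = text.split()
        if len(parts) == 3 and parts[0] == "print":
            events.append((parts[1], int(parts[2])))
    return events

# A hypothetical trial, using the message names documented in this readme
# (bird at bottom left, seed at bottom right, correct 'Feed' choice):
sample = [
    "MSG 100 trial_start9",
    "MSG 150 FixCross",
    "MSG 300 VeiledScene",
    "MSG 520 print bird 3",
    "MSG 700 print seed 5",
    "MSG 900 print true 7",
    "MSG 950 Correct Feedback",
    "MSG 980 trial_stop9",
]
print(print_events(parse_messages(sample)))  # [('bird', 3), ('seed', 5), ('true', 7)]
```

The same loop can be pointed at the lines of demo.asc to recover the event sequence of each trial between its "trial_startX" and "trial_stopX" markers.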
After this a new trial begins with the message "trial_startX", where X indicates the trial number; for the tenth trial it would be "trial_start10". See Figure 1 in the paper for details.

********************************* Performance measures (See figure 2) ************************************

Mean score per trial : Computed as the difference between the score one had at the end of a trial and the score one had at its beginning. These differences are kept in a vector and averaged over subjects and over the trials in a block. We incentivised the participants to sample more informative locations by using a sampling cost: the penalty for attending to the n-th square was -0.25*n. The cost of exploration accumulated as the exploration proceeded; i.e., looking at two squares would cost -0.25 + (-0.5) = -0.75. One was rewarded 2 points for a correct categorisation and penalised 4 points for an incorrect categorisation.

Percentage correct : Computed as the ratio of correctly categorised trials to all trials in a block. This ratio is then averaged over subjects.

Mean saccades per trial : In this paradigm the number of saccades increases by one each time one looks at one of the four quadrants of the scene. Button presses do not count as saccades. Mean saccades per trial was computed as the number of saccades one makes until one categorises the scene. If one makes an incorrect categorisation, one is still allowed to explore the scene and categorise again while time remains. In the case of multiple categorisations (i.e. the first categorisation being incorrect), only the saccades up to the first categorisation (and not those made after it) are taken into account for this measure.

Mean time between saccades : This measure shows how much time, in seconds, it takes to look at two different locations in succession.
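The sampling cost and the score-based measures described above can be illustrated numerically. The following Python sketch (illustrative helpers, not dataset code) reproduces the arithmetic on a hypothetical block's Score and Success vectors:

```python
# Sketch of the score-based performance measures described in this readme.
# The "score" argument mirrors the per-block Score vector (101 entries,
# starting at 100); "success" mirrors the 0/1 Success vector.

def score_changes(score):
    """Per-trial score change: end-of-trial score minus start-of-trial score."""
    return [b - a for a, b in zip(score, score[1:])]

def percentage_correct(success):
    """Share of correctly categorised trials in a block, as a percentage."""
    return 100.0 * sum(success) / len(success)

def sampling_cost(n_squares):
    """Cumulative exploration penalty: -0.25 * k for the k-th square sampled."""
    return sum(-0.25 * k for k in range(1, n_squares + 1))

# Looking at two squares costs -0.25 + (-0.5) = -0.75, as stated above:
print(sampling_cost(2))                        # -0.75
# A correct trial after two squares nets +2 - 0.75 = +1.25:
print(score_changes([100, 101.25, 99.5])[0])   # 1.25
```

Averaging score_changes over the trials of a block (and then over subjects) gives the mean score per trial shown in Figure 2.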
Button presses are not taken into account when calculating the mean time between saccades, as they do not count as saccades. The timestamps at which the objects were seen are registered in the .asc file.

********************************************** Simulations ***********************************************

The simulation routines are available as Matlab code in the SPM academic software: http://www.fil.ion.ucl.ac.uk/spm/. An example model inversion routine can be used (and modified) by downloading the DEM Toolbox and invoking DEM_demo_MDP_fit. For details please see Friston et al. 2007 and Schwartenbeck and Friston 2016. One can reproduce the original scene construction and categorisation simulations (Mirza et al. 2016) by invoking DEM_demo_MDP_search.m. For the model inversion scripts for the scene construction task, and for any other scripts used to generate the figures, please contact me using the contact information given at the bottom.

******************************************** Excluded trials *********************************************

Comparing the coordinates that were looked at while performing the scene construction task with the gaze coordinates saved in the ascii files by the EyeLink 1000 eye-tracker, we found that the stimuli in 20 out of 2200 trials may have been triggered faultily. Poor eye-tracker calibration, head movements while performing the task, blinking etc. are possible causes of faulty gaze coordinate registration. These trials have been excluded from all analyses. Below is a list of all the excluded trials.
Subject 19:
- Trial 81           - Training block 2
- Trials 28, 50, 68  - Testing block 1
- Trial 21           - Testing block 2
- Trial 92           - Testing block 3

Subject 20:
- Trial 63           - Training block 1
- Trials 28, 35      - Training block 2
- Trials 40, 66, 93  - Testing block 1
- Trial 62           - Testing block 2

Subject 21:
- Trials 41, 96      - Training block 1
- Trial 58           - Training block 2
- Trial 32           - Testing block 1
- Trials 52, 96      - Testing block 2
- Trial 67           - Testing block 3

********************************************** Contacting ************************************************

Contact me at:

Muammer Berk Mirza
Wellcome Trust Centre for Neuroimaging at University College London
12 Queen Square - London - United Kingdom - WC1N 3BG

Phone:
  Work   : +44 (0)20 3448 4362
  Mobile : +44 (0)74 6471 4739

E-mail:
  muammer.mirza.15@ucl.ac.uk
  berkmirza@gmail.com

********************************************** References ************************************************

Mirza MB, Adams RA, Mathys CD, Friston K. Scene Construction, Visual Foraging, and Active Inference. Front Comput Neurosci. 2016;10:56.

Friston K, Mattout J, Trujillo-Barreto N, Ashburner J, Penny W. Variational free energy and the Laplace approximation. NeuroImage. 2007;34(1):220-234.

Schwartenbeck P, Friston K. Computational Phenotyping in Psychiatry: A Worked Example. eNeuro. 2016;3(4). https://doi.org/10.1523/ENEURO.0049-16.2016