Data from: Short-latency preference for faces in the primate superior colliculus
Cite this dataset
Yu, Gongchen et al. (2023). Data from: Short-latency preference for faces in the primate superior colliculus [Dataset]. Dryad. https://doi.org/10.5061/dryad.b5mkkwhjw
Abstract
Face processing is fundamental to primates and has been extensively studied in higher-order visual cortex. Here we report that visual neurons in the midbrain superior colliculus (SC) display a preference for faces, that the preference emerges within 50ms of stimulus onset – well before “face patches” in visual cortex – and that this activity can distinguish faces from other visual objects with accuracies of ~80%. This short-latency preference in SC depends on signals routed through early visual cortex, because inactivating the lateral geniculate nucleus, the key relay from retina to cortex, virtually eliminates visual responses in SC, including face-related activity. These results reveal an unexpected circuit in the primate visual system for rapidly detecting faces in the periphery, complementing the higher-order areas needed for recognizing individual faces.
README: Data from: Short-latency preference for faces in the primate superior colliculus
https://doi.org/10.5061/dryad.b5mkkwhjw
README for all datasets provided for manuscript:
- Short-latency preference for faces in the primate superior colliculus
- Gongchen Yu*, Leor N. Katz*, Christian Quaia, Adam Messinger, Richard J. Krauzlis
- *These authors contributed equally
Correspondence: yugongchen1990@gmail.com, richard.krauzlis@nih.gov
Data are provided in .mat files.
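Any of the data files can also be inspected directly in MATLAB, outside of PlotFigures.m. A minimal sketch (the .mat file name below is hypothetical; substitute the actual file provided for the figure of interest):

    % Load one figure's data into a struct (file name here is hypothetical)
    S = load('Figure1C.mat');
    % List the variables the file contains
    disp(fieldnames(S));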
The provided MATLAB script PlotFigures.m loads the data files and plots the figures.
The folder functionforplot contains all the functions needed by PlotFigures.m.
The resulting plots match those in the paper.
A readme for each figure is included within PlotFigures.m.
We also reproduce them here:
Figure 1C
Inside this mat:
'foraveragecategorypsth_individualcategory' - 1*5 cell array corresponding to 5 object categories: 'face', 'body', 'hand', 'fruit&vegetable', 'human made'. Each cell contains a 222*1001 matrix. Each row of the matrix represents the time course of normalized firing rate for one neuron; each column represents the normalized firing rate for a time bin (bin width 20ms, sliding step 1ms).
'psth_bin' - center of the bins, aligned on object onset
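As an example of how these variables fit together, the population-average PSTH per category can be plotted as follows (a minimal sketch; the file name is hypothetical, the variable names are as described above):

    % Population-average PSTH for each of the 5 object categories
    S = load('Figure1C.mat');                      % hypothetical file name
    categories = {'face','body','hand','fruit&vegetable','human made'};
    figure; hold on;
    for k = 1:5
        % Mean across the 222 neurons (rows), one value per time bin
        plot(S.psth_bin, mean(S.foraveragecategorypsth_individualcategory{k}, 1));
    end
    xlabel('Time from object onset (ms)');
    ylabel('Normalized firing rate');
    legend(categories);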
Figure 1D
Inside this mat:
'p_forplot' - 222*1001 matrix, each row represents the time course of ANOVA (with the factor of object category) p values for one neuron; each column represents the p value for each time bin (bin width 20ms, sliding step 1ms).
'preference_forplot' - 222*1001 matrix, each row represents the object category (1, 2, 3, 4, 5 correspond to 'face', 'body', 'hand', 'fruit&vegetable', 'human made') evoking the highest response for one neuron; each column represents this object preference for each time bin (bin width 20ms, sliding step 1ms).
'psth_bin' - center of the bins, aligned on object onset
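One common use of the p-value matrix is to summarize, at each time bin, how many neurons show a significant category effect. A minimal sketch (hypothetical file name; the 0.05 threshold is illustrative, not necessarily the criterion used in the paper):

    % Fraction of neurons with a significant category effect over time
    S = load('Figure1D.mat');                      % hypothetical file name
    sig = S.p_forplot < 0.05;                      % 222 x 1001 logical matrix
    fracSig = mean(sig, 1);                        % fraction of neurons per time bin
    figure;
    plot(S.psth_bin, fracSig);
    xlabel('Time from object onset (ms)');
    ylabel('Fraction of neurons with p < 0.05');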
Figure 1E
Inside this mat:
'foraveragecategorypsth_meansubtraction_individualcategory' - 1*5 cell array corresponding to 5 object categories: 'face', 'body', 'hand', 'fruit&vegetable', 'human made'. Each cell contains a 113*1001 matrix. Each row of the matrix represents the time course of mean subtracted normalized firing rate for one object selective neuron; each column represents the normalized firing rate for a time bin (bin width 20ms, sliding step 1ms).
'psth_bin' - center of the bins, aligned on object onset
Figure 1F
Inside this mat:
'Activity_matrix_sorted_visualonly' - 80*150 matrix, each row represents the mean subtracted normalized firing rate for each visual-only neuron; each column represents the mean subtracted response to each of the 150 object exemplars (1 to 30 face, 31 to 60 body, 61 to 90 hand, 91 to 120 fruit & vegetable, 121 to 150 human made).
'Activity_matrix_sorted_visualmotor' - 31*150 matrix, each row represents the mean subtracted normalized firing rate for each visual-motor neuron; each column represents the mean subtracted response to each of the 150 object exemplars (1 to 30 face, 31 to 60 body, 61 to 90 hand, 91 to 120 fruit & vegetable, 121 to 150 human made).
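These neuron-by-exemplar matrices are naturally viewed as heat maps. A minimal sketch for the visual-only population (hypothetical file name):

    % Heat map of mean subtracted responses, neurons x exemplars
    S = load('Figure1F.mat');                      % hypothetical file name
    figure;
    imagesc(S.Activity_matrix_sorted_visualonly);
    xlabel('Object exemplar (1-30 face, 31-60 body, 61-90 hand, 91-120 fruit & vegetable, 121-150 human made)');
    ylabel('Visual-only neuron');
    colorbar;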
Figure 1G
Inside this mat:
'Activity_matrix' - 113*150 matrix, each row represents the mean subtracted normalized firing rate for each visual-only neuron; each column represents the mean subtracted response to each of the 150 object exemplars (1 to 30 face, 31 to 60 body, 61 to 90 hand, 91 to 120 fruit & vegetable, 121 to 150 human made).
'Activity_mean' - 1*150 matrix, the mean across all the neurons (rows of 'Activity_matrix') to each of the 150 object exemplars (1 to 30 face, 31 to 60 body, 61 to 90 hand, 91 to 120 fruit & vegetable, 121 to 150 human made).
'relative_salience' - 1*150 matrix, the relative salience of the 150 object exemplars (1 to 30 face, 31 to 60 body, 61 to 90 hand, 91 to 120 fruit & vegetable, 121 to 150 human made).
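To relate the population response to salience, the two 1*150 vectors can be plotted against each other. A minimal sketch (hypothetical file name):

    % Mean population response vs. relative salience, one point per exemplar
    S = load('Figure1G.mat');                      % hypothetical file name
    figure;
    scatter(S.relative_salience, S.Activity_mean);
    xlabel('Relative salience');
    ylabel('Mean subtracted normalized firing rate');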
Figure 2A
Inside this mat:
'individual_classificationaccuracy' - 10*81 matrix, time course of classification
accuracy for 10 different classifiers (each row represents one classifier), each column
represents the classification accuracy in each time bin (binsize 40ms, sliding in 5ms steps)
These 10 different classifiers are:
- Row 1: Face vs Body
- Row 2: Face vs Hand
- Row 3: Face vs Fruit
- Row 4: Face vs Human made
- Row 5: Body vs Fruit
- Row 6: Body vs Human made
- Row 7: Hand vs Fruit
- Row 8: Hand vs Human made
- Row 9: Body vs Hand
- Row 10: Fruit vs Human made
'timebin' - center of the bins, aligned on object onset
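The 10 rows can be plotted as a family of accuracy time courses. A minimal sketch (hypothetical file name):

    % Time course of classification accuracy, one line per pairwise classifier
    S = load('Figure2A.mat');                      % hypothetical file name
    figure;
    plot(S.timebin, S.individual_classificationaccuracy');   % transpose: bins x classifiers
    xlabel('Time from object onset (ms)');
    ylabel('Classification accuracy');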
Figure 2B
Inside this mat:
'confusion_matrix_early' - 4*4 matrix, classification accuracy confusion matrix for the early window (40 to 80ms after object onset)
'confusion_matrix_late' - 4*4 matrix, classification accuracy confusion matrix for the late window (90 to 130ms after object onset)
Both confusion matrices have the same row and column structure, listed below:
         column 1              column 2         column 3        column 4
row 1    face vs humanmade     face vs fruit    face vs hand    face vs body
row 2    body vs humanmade     body vs fruit    body vs hand    nan
row 3    hand vs humanmade     hand vs fruit    nan             nan
row 4    fruit vs humanmade    nan              nan             nan
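A minimal sketch for viewing one of these matrices (hypothetical file name; the nan entries simply mark the unused cells of the triangular layout):

    % Display the early-window confusion matrix
    S = load('Figure2B.mat');                      % hypothetical file name
    figure;
    imagesc(S.confusion_matrix_early);             % nan cells render as the lowest color
    xlabel('Column (see table above)');
    ylabel('Row (see table above)');
    colorbar;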
Figure 2C
Inside this mat:
'facenonface_classificationaccuracy' - 1*81 matrix, time course of classification accuracy for the 'face vs nonface' classifier; each value represents the data in each time bin (binsize 40ms, sliding in 5ms steps)
'animateinanimate_classificationaccuracy' - 1*81 matrix, time course of classification accuracy for the 'animate vs inanimate' classifier; each value represents the data in each time bin (binsize 40ms, sliding in 5ms steps)
'timebin' - center of the bins, aligned on object onset
Figure 2D
Inside this mat:
Mean, and lower and upper bounds of the 95% confidence interval, for monkey vs human face classification accuracy in the early window (40 to 80ms after object onset):
'earlymonkeyhumanface_performance_mean'
'earlymonkeyhumanface_performance_lowerCI'
'earlymonkeyhumanface_performance_upperCI'
Mean, and lower and upper bounds of the 95% confidence interval, for monkey vs human face classification accuracy in the late window (90 to 130ms after object onset):
'latemonkeyhumanface_performance_mean'
'latemonkeyhumanface_performance_lowerCI'
'latemonkeyhumanface_performance_upperCI'
Mean, and lower and upper bounds of the 95% confidence interval, for upright vs inverted face classification accuracy in the early window (40 to 80ms after object onset):
'earlyUDface_performance_mean'
'earlyUDface_performance_lowerCI'
'earlyUDface_performance_upperCI'
Mean, and lower and upper bounds of the 95% confidence interval, for upright vs inverted face classification accuracy in the late window (90 to 130ms after object onset):
'lateUDface_performance_mean'
'lateUDface_performance_lowerCI'
'lateUDface_performance_upperCI'
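These scalar means and CI bounds combine naturally into an errorbar plot. A minimal sketch for the monkey-vs-human-face classifier (hypothetical file name):

    % Early vs late accuracy with 95% confidence intervals
    S = load('Figure2D.mat');                      % hypothetical file name
    m  = [S.earlymonkeyhumanface_performance_mean,    S.latemonkeyhumanface_performance_mean];
    lo = [S.earlymonkeyhumanface_performance_lowerCI, S.latemonkeyhumanface_performance_lowerCI];
    hi = [S.earlymonkeyhumanface_performance_upperCI, S.latemonkeyhumanface_performance_upperCI];
    figure;
    errorbar(1:2, m, m - lo, hi - m, 'o');
    set(gca, 'XTick', 1:2, 'XTickLabel', {'early (40-80ms)', 'late (90-130ms)'});
    ylabel('Classification accuracy');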
Figure 3B
Inside this mat:
'bData' - 1*8 struct corresponding to 8 sessions of SC recording before
and after LGN inactivation
Inside 'bData':
'sessionName' - name of the recording session
'monkeyStr' - monkey subject name
'clr' - color for plot
'sacFailMap' - data for plotting the saccade failure heat map in Figure 3B
'sacFailPercent' - data for plotting saccade failure, contra vs ipsi, in Figure 3C
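A minimal sketch for stepping through the session struct array (hypothetical file name; assumes the name fields are character arrays):

    % List the 8 sessions stored in 'bData'
    S = load('Figure3B.mat');                      % hypothetical file name
    for k = 1:numel(S.bData)
        fprintf('%s (monkey %s)\n', S.bData(k).sessionName, S.bData(k).monkeyStr);
    end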
Figure 3C
Inside this mat:
'bData' - 1*8 struct corresponding to 8 sessions of SC recording before
and after LGN inactivation
Inside 'bData':
'sessionName' - name of the recording session
'monkeyStr' - monkey subject name
'clr' - color for plot
'sacFailMap' - data for plotting the saccade failure heat map in Figure 3B
'sacFailPercent' - data for plotting saccade failure, contra vs ipsi, in Figure 3C
Figure 3D
Inside this mat are all results before LGN inactivation:
'foraveragecategorypsth_forplot_beforeLGNinactivation' - 1*3 cell array corresponding to 3 object categories: 'face', 'hand', 'human made'. Each cell contains a 114*1001 matrix.
Each row of the matrix represents the time course of normalized firing rate for one neuron,
each column represents the normalized firing rate for a time bin (bin width 20ms, sliding step 1ms).
'p_forplot_beforeLGNinactivation' - 114*1001 matrix, each row represents the time course of ANOVA (with the factor of object category) p values for one neuron; each column represents the p value for each time bin (bin width 20ms, sliding step 1ms).
'preference_forplot_beforeLGNinactivation' - 114*1001 matrix, each row represents the object category (1, 2, 3 correspond to 'face', 'hand', 'human made') evoking the highest response for one neuron; each column represents this object preference for each time bin (bin width 20ms, sliding step 1ms).
'psth_bin' - center of the bins, aligned on object onset, same for both
psth and ANOVA plot
Figure 3E
Inside this mat are all results after LGN inactivation:
'foraveragecategorypsth_forplot_afterLGNinactivation' - 1*3 cell array corresponding to 3 object categories: 'face', 'hand', 'human made'. Each cell contains a 114*1001 matrix.
Each row of the matrix represents the time course of normalized firing rate for one neuron,
each column represents the normalized firing rate for a time bin (bin width 20ms, sliding step 1ms).
'p_forplot_afterLGNinactivation' - 114*1001 matrix, each row represents the time course of ANOVA (with the factor of object category) p values for one neuron; each column represents the p value for each time bin (bin width 20ms, sliding step 1ms).
'preference_forplot_afterLGNinactivation' - 114*1001 matrix, each row represents the object category (1, 2, 3 correspond to 'face', 'hand', 'human made') evoking the highest response for one neuron; each column represents this object preference for each time bin (bin width 20ms, sliding step 1ms).
'psth_bin' - center of the bins, aligned on object onset, same for both
psth and ANOVA plot
Figure 4A
Inside this mat:
'confusion_matrix_V1model' - 4*4 matrix, classification accuracy
confusion matrix for the V1-based model
This confusion matrix has the same row and column structure as in Figure 2B, listed below:
         column 1              column 2         column 3        column 4
row 1    face vs humanmade     face vs fruit    face vs hand    face vs body
row 2    body vs humanmade     body vs fruit    body vs hand    nan
row 3    hand vs humanmade     hand vs fruit    nan             nan
row 4    fruit vs humanmade    nan              nan             nan
Figure 4B
Inside this mat:
Classification results of 10 different classifiers using the V1-based model output; the structures are all 10*1 matrices:
'result_classification_V1_mean' - mean of the classification accuracy
'result_classification_V1_lowerCI' - lower bound of the 95% confidence interval of the classification accuracy
'result_classification_V1_upperCI' - upper bound of the 95% confidence interval of the classification accuracy
Classification results of 10 different classifiers using the SC early response (40 to 80ms after object onset); the structures are all 10*1 matrices:
'result_classification_SCearly_mean' - mean of the classification accuracy
'result_classification_SCearly_lowerCI' - lower bound of the 95% confidence interval of the classification accuracy
'result_classification_SCearly_upperCI' - upper bound of the 95% confidence interval of the classification accuracy
These 10 classifiers are listed by row:
- Row 1: Face vs Body
- Row 2: Face vs Hand
- Row 3: Face vs Fruit
- Row 4: Face vs Human made
- Row 5: Body vs Hand
- Row 6: Body vs Fruit
- Row 7: Body vs Human made
- Row 8: Hand vs Fruit
- Row 9: Hand vs Human made
- Row 10: Fruit vs Human made
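The V1-model and SC-early results can be compared side by side across the 10 classifiers. A minimal sketch (hypothetical file name; variables as described above):

    % V1-based model vs SC early accuracy, with 95% confidence intervals
    S = load('Figure4B.mat');                      % hypothetical file name
    x = (1:10)';                                   % one position per classifier
    figure; hold on;
    errorbar(x - 0.1, S.result_classification_V1_mean, ...
        S.result_classification_V1_mean - S.result_classification_V1_lowerCI, ...
        S.result_classification_V1_upperCI - S.result_classification_V1_mean, 'o');
    errorbar(x + 0.1, S.result_classification_SCearly_mean, ...
        S.result_classification_SCearly_mean - S.result_classification_SCearly_lowerCI, ...
        S.result_classification_SCearly_upperCI - S.result_classification_SCearly_mean, 's');
    xlabel('Classifier (see list above)');
    ylabel('Classification accuracy');
    legend({'V1-based model', 'SC early (40-80ms)'});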
Figure 4C
Inside this mat:
Classification results of 10 different classifiers using the V1-based model output; the structures are all 10*1 matrices:
'result_classification_V1_mean' - mean of the classification accuracy
'result_classification_V1_lowerCI' - lower bound of the 95% confidence interval of the classification accuracy
'result_classification_V1_upperCI' - upper bound of the 95% confidence interval of the classification accuracy
Classification results of 10 different classifiers using the SC late response (90 to 130ms after object onset); the structures are all 10*1 matrices:
'result_classification_SClate_mean' - mean of the classification accuracy
'result_classification_SClate_lowerCI' - lower bound of the 95% confidence interval of the classification accuracy
'result_classification_SClate_upperCI' - upper bound of the 95% confidence interval of the classification accuracy
These 10 classifiers are listed by row:
- Row 1: Face vs Body
- Row 2: Face vs Hand
- Row 3: Face vs Fruit
- Row 4: Face vs Human made
- Row 5: Body vs Hand
- Row 6: Body vs Fruit
- Row 7: Body vs Human made
- Row 8: Hand vs Fruit
- Row 9: Hand vs Human made
- Row 10: Fruit vs Human made
Usage notes
MATLAB R2022b
Funding
National Eye Institute, Award: ZIA EY000511, Intramural Research Program