Deep learning models challenge the prevailing assumption that face-like effects for objects of expertise support domain-general mechanisms
Data files
Apr 20, 2023 version files (8.70 GB)
- 1000_faces.csv
- 1000_faces.pth
- 1000_inanimates.csv
- 1000_objects.pth
- 260_faces.csv
- 260_faces.pth
- 260_inanimates.csv
- 260_inanimates.pth
- 260_species.csv
- 260_species.pth
- 30_faces.csv
- 30_faces.pth
- 30_inanimates.csv
- 30_inanimates.pth
- 30_soc_weav.csv
- 30_soc_weav.pth
- 30_species.csv
- 30_species.pth
- README.md
- verification_tests.zip
Abstract
The question of whether perceptual expertise is mediated by domain-general or domain-specific processing mechanisms has been debated for decades. Because humans are experts in face recognition, face-like neural and cognitive effects for objects of expertise were considered to support the domain-general hypothesis. Conversely, stronger effects for faces than for objects of expertise were considered to support the domain-specific hypothesis. However, the effects of domain, experience, and level of categorization are confounded in human studies, which may lead to erroneous inferences. To overcome these limitations, we used computational models of perceptual expertise and tested different domains (objects, faces, birds) and levels of categorization (basic, subordinate, individual) in isolation, matched for amount of experience. Like humans, the models generated a larger inversion effect for faces than for objects. Importantly, a face-like inversion effect was found for individual-based categorization of non-faces (birds), but only in a network specialized for that domain. Thus, contrary to prevalent assumptions, face-like effects for objects of expertise may originate from domain-specific rather than domain-general processing mechanisms. More generally, we show how deep learning algorithms can be used to isolate the effects of factors that are inherently confounded in the natural environment of biological organisms.
Methods
Creation of the verification tests for upright and inverted images is described in the article.
Each verification test is composed of two txt files:
- a "same" file, containing pairs of images belonging to the same class (both either upright or inverted)
- a "diff" file, containing pairs of images belonging to different classes (both either upright or inverted)
To calculate the AUROC of a verification test, compute the distances between the image pairs listed in the corresponding "same" and "diff" files; a sketch of this computation is given below. Corresponding txt files are found in the same directory, with the prefix "same" or "diff" and the suffix "_{id}.txt". For example, "same_3.txt" and "diff_3.txt" together form one verification test.
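For concreteness, here is a minimal sketch of the AUROC computation for one verification test. It assumes each line of a pair file names two images, and it relies on a helper `embed` (hypothetical, not part of this dataset) that maps an image path to the model's feature vector for that image; the cosine distance used here is one reasonable choice of metric, not necessarily the one used in the paper.

```python
# Sketch of the AUROC computation for one verification test.
# Assumptions (check against the actual files): each line of
# same_{id}.txt / diff_{id}.txt names two images, and embed() is a
# hypothetical helper returning the model's feature vector for an image.
import numpy as np
from sklearn.metrics import roc_auc_score

def read_pairs(path):
    with open(path) as f:
        return [line.split() for line in f if line.strip()]

def pair_distances(pairs, embed):
    dists = []
    for a, b in pairs:
        x, y = embed(a), embed(b)
        cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
        dists.append(1.0 - cos)  # cosine distance (assumed metric)
    return np.array(dists)

def verification_auroc(same_file, diff_file, embed):
    d_same = pair_distances(read_pairs(same_file), embed)
    d_diff = pair_distances(read_pairs(diff_file), embed)
    # Label "diff" pairs 1 so that larger distances should score higher;
    # AUROC = 0.5 is chance, 1.0 is perfect separation.
    labels = np.concatenate([np.zeros_like(d_same), np.ones_like(d_diff)])
    scores = np.concatenate([d_same, d_diff])
    return roc_auc_score(labels, scores)

# Example: verification_auroc("same_3.txt", "diff_3.txt", embed)
```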
Distances between image representations according to the different models can also be found in correspondingly named csv files (for example, "260_inanimates.csv" contains distances from the network trained on 260 classes from ImageNet); a sketch of reading one of these files follows.
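The exact column layout of the csv files is not documented here, so the column names in this sketch ("label", "distance") are assumptions; inspect the header first and adjust accordingly.

```python
# Sketch: computing AUROC directly from one of the distance csv files.
# The column names "label" and "distance" are assumptions, not documented
# fields; print the header first and substitute the real names.
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.read_csv("260_inanimates.csv")
print(df.columns)  # check the real column names before relying on them
# labels = (df["label"] == "diff").astype(int)
# print(roc_auc_score(labels, df["distance"]))
```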
Trained neural networks are available as PyTorch weights and were trained according to the procedure described in the paper.
Usage notes
Model weights are given as PyTorch state_dict files; a loading sketch is given below.
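A minimal loading sketch follows. The backbone used here (a torchvision VGG-16 with the class count implied by the filename) is an assumption for illustration; the authoritative architecture and training details are described in the paper, and the instantiated model must match them for load_state_dict to succeed.

```python
# Sketch: loading one of the provided .pth files.
# The VGG-16 backbone and the class count (260, taken from the filename)
# are assumptions; if the keys do not match, check the architecture
# described in the paper and whether torch.load returned a full
# checkpoint dict rather than a bare state_dict.
import torch
from torchvision.models import vgg16

state_dict = torch.load("260_faces.pth", map_location="cpu")
model = vgg16(num_classes=260)
model.load_state_dict(state_dict)
model.eval()  # inference mode for extracting representations
```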