Data from: Crowds replicate performance of scientific experts scoring phylogenetic matrices of phenotypes
Data files
May 26, 2017 version files (1.71 MB)
- Appendix 1 Run3onlyVer5.xlsx (46.74 KB)
- Appendix 2 characterDifficultyMP_3-15-16.xlsx (43.75 KB)
- Appendix 3 Anemones-character-taxon-results.xlsx (36.23 KB)
- Appendix 4 Anemones-user-results.csv (9.32 KB)
- Appendix 5 Bats-character-taxon-results.xlsx (38.84 KB)
- Appendix 6 Bats-users-results.csv (7.40 KB)
- Appendix 7 Catfish-character-taxon-results.xlsx (37.31 KB)
- Appendix 8 Catfish-user-results.csv (10.34 KB)
- Appendix 9 Diatoms-character-taxon-results.xlsx (39.80 KB)
- Appendix 10 Diatoms-user-results.csv (10.68 KB)
- Appendix 11 Lilies-character-taxon-results.xlsx (35.98 KB)
- Appendix 12 Lilies-user-results.csv (8.88 KB)
- Appendix 13 Marine-Shrimp-character-taxon-results.xlsx (34.91 KB)
- Appendix 14 Marine-Shrimp-user-results.csv (9.01 KB)
- Appendix 15 all-users-results.xlsx (75.39 KB)
- Appendix 16 joint-predicted-difficulty.xlsx (15.81 KB)
- Appendix 17 Diatoms-user-thresh-curve.xlsx (19.48 KB)
- Appendix 18 Ver2 final-results-summary.xlsx (17.45 KB)
- Appendix 19 Data and R Scripts.zip (450.77 KB)
- Appendix 20 Instructions to Crowd.pdf (757.95 KB)
May 26, 2017 version files (3.20 MB)
- Appendix 1 Run3onlyVer5.xlsx (46.74 KB)
- Appendix 2 characterDifficultyMP_3-15-16_MAH edited.xlsx (41.17 KB)
- Appendix 3 Anemones-character-taxon-results.xlsx (36.23 KB)
- Appendix 4 Anemones-user-results.csv (9.32 KB)
- Appendix 5 Bats-character-taxon-results.xlsx (38.84 KB)
- Appendix 6 Bats-users-results.csv (7.40 KB)
- Appendix 7 Catfish-character-taxon-results.xlsx (37.31 KB)
- Appendix 8 Catfish-user-results.csv (10.34 KB)
- Appendix 9 Diatoms-character-taxon-results.xlsx (39.80 KB)
- Appendix 10 Diatoms-user-results.csv (10.68 KB)
- Appendix 11 Lilies-character-taxon-results.xlsx (35.98 KB)
- Appendix 12 Lilies-user-results.csv (8.88 KB)
- Appendix 13 Marine-Shrimp-character-taxon-results.xlsx (34.91 KB)
- Appendix 14 Marine-Shrimp-user-results.csv (9.01 KB)
- Appendix 15 all-users-results.xlsx (75.39 KB)
- Appendix 16 joint-predicted-difficulty.xlsx (15.81 KB)
- Appendix 17 Diatoms-user-thresh-curve.xlsx (19.48 KB)
- Appendix 18 final-results-summary_MAH edited.xlsx (17.21 KB)
- Appendix 18 Ver2 final-results-summary.xlsx (17.45 KB)
- Appendix 19 Data and R Scripts_MAH modified.zip (518.20 KB)
- Appendix 19 Data and R Scripts.zip (450.77 KB)
- Appendix 20 Instructions to Crowd.pdf (757.95 KB)
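
For analysis, the six per-dataset user-results CSVs can be loaded and combined. Below is a minimal Python sketch, assuming the appendix files have been downloaded into a local data/ directory; the per-user "accuracy" column it summarizes is an assumption, so check the actual headers in each file before use.

```python
# Minimal sketch for loading the per-dataset user-results CSVs.
# Assumes the appendices were downloaded into ./data/ with their original
# filenames; the "accuracy" column name below is hypothetical.
import pandas as pd

FILES = {
    "Anemones": "Appendix 4 Anemones-user-results.csv",
    "Bats": "Appendix 6 Bats-users-results.csv",
    "Catfish": "Appendix 8 Catfish-user-results.csv",
    "Diatoms": "Appendix 10 Diatoms-user-results.csv",
    "Lilies": "Appendix 12 Lilies-user-results.csv",
    "Marine-Shrimp": "Appendix 14 Marine-Shrimp-user-results.csv",
}

frames = []
for dataset, filename in FILES.items():
    df = pd.read_csv(f"data/{filename}")
    df["dataset"] = dataset  # tag rows with their source dataset
    frames.append(df)

users = pd.concat(frames, ignore_index=True)

# If the files expose a per-user accuracy column (name assumed here),
# summarize it per dataset:
if "accuracy" in users.columns:
    print(users.groupby("dataset")["accuracy"].describe())
```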
Abstract
Scientists building the Tree of Life face the overwhelming challenge of categorizing phenotypes (e.g., anatomy, physiology) from millions of living and fossil species. This biodiversity challenge far outstrips the capacities of trained scientific experts. Here we explore whether crowdsourcing can be used to collect matrix data on a large scale with the participation of non-expert students, or "citizen scientists." Crowdsourcing, or data collection by non-experts (frequently via the internet), has enabled scientists to tackle large-scale data collection challenges too massive for individuals or scientific teams alone. The quality of work by non-expert crowds is, however, often questioned, and little data has been collected on how such crowds perform on complex tasks such as phylogenetic character coding. We studied a crowd of over 600 non-experts and found that they could use images to identify anatomical similarity (hypotheses of homology) with an average accuracy of 82% relative to scores provided by experts in the field. This performance pattern held across the Tree of Life, from protists to vertebrates. We introduce a procedure that predicts the difficulty of each character, so that harder characters can be assigned to experts and easier characters to a non-expert crowd for scoring. In a controlled experiment comparing crowd scores to those of experts, we show that crowds can produce matrices with over 90% of cells scored correctly while halving the number of cells that experts must score. Preparation for a crowdsourcing experiment, including image collection and processing, takes significant time, so the approach does not currently reduce the overall effort of scientific experts. However, if innovations in automation or robotics can reduce that effort, then large-scale implementation of our method could greatly increase the collective scientific knowledge of species phenotypes for phylogenetic tree building. For the field of crowdsourcing, we provide a rare study with ground truth, an experimental control that many studies lack, and contribute new methods for coordinating the work of experts and non-experts. We show that there are important instances in which crowd consensus is not a good proxy for correctness.
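
The abstract's two key quantities, crowd-consensus accuracy against expert ground truth and the expert workload saved by routing only hard characters to experts, can be illustrated with a small sketch. This is not the authors' analysis pipeline (see the R scripts in Appendix 19); it is a minimal Python illustration with simulated scores, and the difficulty threshold of 0.5 is an assumed cutoff.

```python
# Illustrative sketch (not the authors' pipeline): majority-vote crowd
# consensus scored against expert ground truth, plus difficulty-based
# routing of characters between crowd and experts. All data are simulated.
from collections import Counter
import random

random.seed(0)

N_CHARACTERS = 200
N_VOTERS = 11

# Hypothetical per-character difficulty in [0, 1] and expert (true) scores.
difficulty = [random.random() for _ in range(N_CHARACTERS)]
expert = [random.choice([0, 1]) for _ in range(N_CHARACTERS)]

def crowd_votes(true_state, p_correct, n=N_VOTERS):
    """Simulate n noisy binary votes that match the expert score with prob p_correct."""
    return [true_state if random.random() < p_correct else 1 - true_state
            for _ in range(n)]

def majority(votes):
    """Majority-vote consensus; n is odd, so binary votes never tie."""
    return Counter(votes).most_common(1)[0][0]

# Harder characters get noisier votes. On very hard characters the crowd can
# converge on the *wrong* state, so consensus is not proof of correctness
# (one of the paper's points).
consensus = [majority(crowd_votes(s, 0.95 - 0.55 * d))
             for s, d in zip(expert, difficulty)]
acc = sum(c == e for c, e in zip(consensus, expert)) / N_CHARACTERS
print(f"crowd consensus accuracy on all characters: {acc:.2%}")

# Routing: send characters whose predicted difficulty exceeds a threshold
# (0.5 is an assumed cutoff) to experts; keep the rest with the crowd.
THRESHOLD = 0.5
final = [e if d > THRESHOLD else c
         for c, e, d in zip(consensus, expert, difficulty)]
routed = sum(d > THRESHOLD for d in difficulty)
final_acc = sum(f == e for f, e in zip(final, expert)) / N_CHARACTERS
print(f"cells still scored by experts: {routed / N_CHARACTERS:.0%}")
print(f"matrix accuracy after routing: {final_acc:.2%}")
```

With these simulated numbers, routing roughly the hardest half of the characters to experts pushes matrix accuracy well above the crowd-only figure, mirroring the trade-off the abstract reports: over 90% of cells correct with about 50% fewer cells scored by experts.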