Dryad

Data from: Flexible methods for species distribution modeling with small samples

Data files

Dec 16, 2025 version files 259.40 MB


Abstract

Species distribution models (SDMs) predict where species live or could potentially live and are a key resource for ecological research and conservation decision-making. However, current SDM methods often perform poorly for rare or inadequately sampled species, which include most species on Earth as well as most species of greatest conservation concern. Here, we evaluated the performance of three modeling approaches designed for data-deficient situations: plug-and-play modeling, density-ratio modeling, and environmental-range modeling. We compared the performance of algorithms within these approaches against the maximum entropy (MaxEnt) model, a widely used density-ratio algorithm, both for data-poor species and more generally. We also tested the extent to which cross-validation performance on training data predicts model performance on independent presence-absence data. We found that no algorithm performed best in all situations. Across all species, MaxEnt performed best on average but was outperformed by one or more of the plug-and-play, density-ratio, or environmental-range algorithms in 72% of cases. Six of the other algorithms had area under the receiver operating characteristic curve (AUC) distributions not significantly different from MaxEnt’s, and for data-poor species (those with 20 or fewer occurrences), 24 of the algorithms considered had AUC distributions not significantly different from MaxEnt’s. However, the algorithm outputs, when thresholded to predict presence versus absence, spanned a wide sensitivity-specificity gradient. Specificity and prediction accuracy assessed on training data were strongly correlated with the same metrics assessed on independent presence-absence data. However, AUC and sensitivity were only weakly correlated between training and testing sets, with only 22% of species having the same model perform best when evaluated on training data and on independent presence-absence data.
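For readers unfamiliar with the evaluation metrics used here, the following is a minimal, self-contained sketch (not the authors' code) of how AUC, sensitivity, and specificity are computed for a thresholded SDM prediction. The labels, scores, and threshold are illustrative assumptions; real evaluations would use the occurrence and presence-absence data in this archive.

```python
# Illustrative sketch (not the authors' implementation) of the evaluation
# metrics named in the abstract: AUC, sensitivity, and specificity.
# labels: 1 = observed presence, 0 = observed absence
# scores: continuous suitability output from an SDM algorithm

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) method."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sensitivity_specificity(labels, scores, threshold=0.5):
    """Threshold scores into presence/absence and compare to observations."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    tn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 0)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    return tp / (tp + fn), tn / (tn + fp)  # (sensitivity, specificity)

# Hypothetical example data:
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]
print(auc(labels, scores))                      # rank-based AUC
print(sensitivity_specificity(labels, scores))  # trade-off at one threshold
```

Moving the threshold trades sensitivity against specificity, which is why algorithms with similar AUC can still occupy very different positions on the sensitivity-specificity gradient described above.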
Finally, we show how ensembles of algorithms that span the sensitivity-specificity gradient can represent model disagreement in poorly sampled species and improve model predictions.
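The ensemble idea above can be sketched as follows. This is a hypothetical illustration, not the authors' method: it simply combines thresholded presence/absence maps from several algorithms so that cell-wise agreement doubles as a measure of model disagreement.

```python
# Hypothetical sketch of an ensemble over algorithms spanning the
# sensitivity-specificity gradient (an assumption, not the authors' code).

def ensemble_agreement(predictions):
    """predictions: list of per-algorithm binary maps (lists of 0/1).
    Returns, for each cell, the fraction of algorithms predicting presence."""
    n = len(predictions)
    return [sum(cell) / n for cell in zip(*predictions)]

# Three hypothetical algorithms over four grid cells:
maps = [
    [1, 1, 1, 0],  # high-sensitivity algorithm (predicts presence broadly)
    [1, 1, 0, 0],  # intermediate algorithm
    [1, 0, 0, 0],  # high-specificity algorithm (predicts presence narrowly)
]
agreement = ensemble_agreement(maps)
print(agreement)  # 1.0 or 0.0 = full agreement; intermediate = disagreement
```

Cells where the fraction is near 0 or 1 reflect consensus, while intermediate values flag locations where algorithms disagree, which is most informative for poorly sampled species.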