Island size predicts mammal diversity in insular environments, except for land-bridge islands
de Souza Ferreira Neto, Gilson (2022), Island size predicts mammal diversity in insular environments, except for land-bridge islands, Dryad, Dataset, https://doi.org/10.5061/dryad.1zcrjdfvn
Insular environments are among the most endangered ecosystems, as they face a myriad of anthropogenic stressors. Forest mammals perform a wide range of ecological services, and their persistence is vital for ecosystem functionality in both natural and artificial islands. Studies have revealed that shrinkage in island size usually leads to the decay of mammal species richness and abundance in patchy landscapes. However, mammal species-area relationships (SARs) and abundance-area relationships (AARs) can differ among insular environments: oceanic, fluvial, artificial, and land-bridge islands (i.e., natural islands connected to the mainland). Large dams create vast insularized landscapes after river impoundment, leading to pervasive habitat loss and potentially causing even worse biodiversity losses than other insular systems. We conducted an extensive literature search and used meta-analysis techniques to quantify the magnitude of SARs and AARs for forest mammals across different archipelagic landscapes worldwide. After a screening process, we retained 26 studies comprising 55 different effect sizes representing the magnitude of SARs and AARs. Our global analysis unveiled a positive relationship between effect sizes and island area, with mammal species richness and abundance increasing with island area on fluvial, oceanic, and artificial islands, but not on land-bridge islands. These results demonstrate that, except for land-bridge islands, SARs and AARs remain fair models for predicting mammal diversity. They could improve the prediction of SARs and AARs in insular environments under habitat-loss scenarios and inform sound conservation strategies, since the rate at which insular communities have been lost is presently unknown.
2. MATERIAL AND METHODS
2.1 Search and studies selection
This study was carried out following the recommendations of the PRISMA initiative (Preferred Reporting Items for Systematic Reviews and Meta-Analysis; Moher et al., 2009). The literature search was concluded in April 2021, using the ISI Web of Science™ database (WoS) and Scopus. The following keywords were used: ("marine island*" OR "oceanic island*" OR "reservoir island*" OR "artificial island*" OR "forest island*" OR "land-bridge island*" OR "fluvial island*" OR "river* island*" OR "wetland*" OR "floodplain forest*") AND ("mammal*" OR "vertebrate*"). This set of keywords was applied to the "Topic" search with no filters. A total of 2190 and 1860 studies were found in WoS and Scopus, respectively. However, 2136 studies were excluded from this review because they were out of scope (they did not evaluate mammals on islands). We focused on non-flying mammals, given that bats have a higher dispersal ability than arboreal and terrestrial forest mammals.
After a careful assessment using our inclusion and exclusion criteria (see the section 'Data extraction' for more details), only 6% of the papers (k = 129 studies) met our eligibility criteria. We also included one study that we were aware of (Dalecky et al., 2002) but that was not returned by our search. To be included, a study had to present the SAR, the AAR, or the island area together with the number of species or the abundance per surveyed island, which we subsequently used to calculate effect sizes. At least three sampling sites per study were required for inclusion in our database, and all sampling techniques were considered. Finally, because this study focused on native faunal assemblages, exotic species were excluded.
Seventy-nine studies were excluded because they provided no quantitative information relevant for synthesis, such as sample size, average, standard deviation, or correlation coefficients (Figure 1, Supporting information). Studies with a sample size lower than three were also excluded. After this assessment, our meta-analysis included 26 studies, spanning 22 insular landscapes, that tested the effects of island area on mammal species richness or abundance.
FIGURE 1. Flow diagram summarizing the procedure used to select studies for the systematic review.
2.2 Data extraction
For each study that met the eligibility criteria, we recorded the sample size and the correlation coefficient (Pearson's r or Spearman's ρ) or the coefficient of determination (r²) of a simple regression between island area and either species richness or abundance. Two studies provided neither a correlation measure, r², mean, nor standard deviation; for these, we extracted data from figures to calculate effect sizes using the "metaDigitise" package (Pick et al., 2018). We considered as oceanic islands all sea-surrounded islands that do not sit on continental shelves. Fluvial islands encompassed islands surrounded by a river, transitory islands, and islands disconnected from the mainland (Kalliola, 1993). Finally, land-bridge islands were those temporarily connected to the mainland (Newmark, 1987), while artificial islands were those anthropogenically created by dams. We also identified the distribution of the studies per biogeographical region (Afrotropical, Australasia, Indomalaya, Nearctic, Neotropic, and Palearctic).
2.3 Data analysis
We transformed Spearman's correlations (ρ) to Pearson's product-moment correlation coefficient (r) following Lajeunesse (2013): r = 2 sin(πρ/6) if n < 90, or r = ρ if n ≥ 90, where n represents the sample size. The included studies (k = 26) presented a coefficient of determination (r²) from a simple regression between island area and mammal richness or abundance; in these cases, we estimated r as the square root of r². Next, we transformed all r values to Fisher's z (and computed their respective variances) following Borenstein et al. (2009).
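The transformations above can be sketched as follows. This is an illustrative Python version (the analysis itself was run in R), with function names of our own choosing:

```python
import math

def spearman_to_pearson(rho, n):
    """Approximate Pearson's r from Spearman's rho (Lajeunesse 2013).

    For n < 90 a trigonometric approximation is used; for larger
    samples rho is taken as-is.
    """
    return 2 * math.sin(math.pi * rho / 6) if n < 90 else rho

def fishers_z(r):
    """Fisher's z transform of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def fishers_z_variance(n):
    """Sampling variance of Fisher's z (Borenstein et al. 2009)."""
    return 1 / (n - 3)

# Example: a study reporting Spearman's rho = 0.6 from n = 20 islands
r = spearman_to_pearson(0.6, 20)
z = fishers_z(r)
v = fishers_z_variance(20)
```

Note that for an r derived from r², the sign must be taken from the reported direction of the relationship, since the square root alone is always positive.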
We estimated the weighted effect size using a random-effects model, in which we assume that the studies do not share a common true effect size but rather a true mean effect size with a given true between-study variance (T²) (Borenstein et al., 2009). We estimated T² by restricted maximum likelihood and weighted each effect size by the inverse of its total variance. We quantified the heterogeneity in study outcomes (effect sizes summarizing SARs and AARs) with the T² and I² statistics. T² was decomposed into T²between and T²study-level. The T² statistic is the between-study variance, a heterogeneity measure on the same scale as the effect size. The I² statistic measures how much of the heterogeneity is true variability (i.e., not due to sampling error) and is expressed on a relative scale (Borenstein et al., 2009; 2017). We also reported Cochran's Q, the weighted sum of squared differences between individual study effects and the pooled effect across studies (Koricheva et al., 2013).
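For illustration, these heterogeneity statistics can be computed as below. This minimal Python sketch uses the closed-form DerSimonian–Laird estimator for T² rather than REML (which we obtained via metafor in R), so its estimates will differ slightly from the fitted model:

```python
def random_effects_summary(z, v):
    """Given Fisher's z effect sizes and their variances, return the
    random-effects pooled mean, T² (DerSimonian-Laird), I² (%), and Q."""
    w = [1 / vi for vi in v]                     # fixed-effect weights
    sw = sum(w)
    zbar_fe = sum(wi * zi for wi, zi in zip(w, z)) / sw
    # Cochran's Q: weighted sum of squared deviations from the pooled effect
    q = sum(wi * (zi - zbar_fe) ** 2 for wi, zi in zip(w, z))
    k = len(z)
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)           # between-study variance
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    w_re = [1 / (vi + tau2) for vi in v]         # random-effects weights
    zbar_re = sum(wi * zi for wi, zi in zip(w_re, z)) / sum(w_re)
    return zbar_re, tau2, i2, q
```

With the random-effects weights 1/(vᵢ + T²), imprecise studies are down-weighted less drastically than under a fixed-effect model, reflecting the assumed distribution of true effects.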
We assessed the effect of island environment (levels: artificial, land-bridge, oceanic, or fluvial islands) on Fisher's z to quantify how much heterogeneity (variability in study outcomes) could be explained by these archipelagic characteristics. The effects of these factors (or moderators, in meta-analysis jargon) were evaluated by subgroup analyses (Borenstein et al., 2009; 2017), performed separately for each moderator using the "metafor" package (Viechtbauer, 2010). We only tested levels of a moderator represented by at least four studies; in most cases, these studies were conducted at different locations, which avoided small sample sizes for parameter estimates. The effect of each moderator was assessed by partitioning Cochran's Q into heterogeneity explained by the moderator variable (QM) and residual heterogeneity (QR), analogous to an analysis of variance (Borenstein et al., 2009). QM follows a χ² distribution and, if significant, indicates that the mean effect size of at least one level of the moderator differs from the others (in other words, the moderator explains part of the heterogeneity). We also fitted the subgroup models without intercepts to test whether the average effect size of each moderator level differs from zero. In some cases, we estimated several effect sizes for a single study; such effect sizes cannot be treated as independent information, since they were estimated from the same sampling units (a multiplicity artifact; Hedges et al., 2010; López-López et al., 2018). Thus, to control for effect-size multiplicity, the cumulative effect size and subgroup analyses consisted of multi-level meta-analyses (Nakagawa & Santos, 2012), in which we added a random-effects term encoding the study corresponding to each group of dependent effect sizes (T²study-level) (Nakagawa & Santos, 2012).
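The QM/QR partition can be illustrated with the simplified sketch below. It uses fixed-effect weights within each moderator level for clarity, whereas our actual subgroup models were multi-level random-effects fits in metafor; the partitioning logic is the same:

```python
def subgroup_q_partition(groups):
    """Partition total Cochran's Q into QM (explained by the moderator)
    and QR (residual, within-level), analogous to an ANOVA.
    `groups` maps each moderator level to (effect_sizes, variances)."""
    def q_of(z, v):
        w = [1 / vi for vi in v]
        zbar = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)
        return sum(wi * (zi - zbar) ** 2 for wi, zi in zip(w, z))

    all_z = [zi for z, _ in groups.values() for zi in z]
    all_v = [vi for _, v in groups.values() for vi in v]
    q_total = q_of(all_z, all_v)
    q_r = sum(q_of(z, v) for z, v in groups.values())  # within-level Q
    return q_total - q_r, q_r                          # (QM, QR)
```

A QM near q_total (with small QR) means the moderator levels account for most of the heterogeneity among effect sizes.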
The effect of spatial extent was evaluated by a meta-regression based on log-transformed data to improve linearity (Borenstein et al., 2009).
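A weighted least-squares slope on the log-transformed predictor captures the core of this step. The Python sketch below is a simple stand-in for metafor's mixed-effects meta-regression (which also estimates residual T²); variable names are our own:

```python
import math

def meta_regression_slope(z, v, extent):
    """Inverse-variance-weighted least-squares slope of effect size (z)
    on log10 spatial extent -- the basic machinery of a meta-regression."""
    x = [math.log10(e) for e in extent]          # log-transform for linearity
    w = [1 / vi for vi in v]                     # inverse-variance weights
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    zbar = sum(wi * zi for wi, zi in zip(w, z)) / sw
    num = sum(wi * (xi - xbar) * (zi - zbar) for wi, xi, zi in zip(w, x, z))
    den = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    return num / den
```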
To assess publication bias, we used a funnel plot to identify asymmetry in the publications on SARs and AARs. The funnel plot is a scatterplot of effect sizes against a measure of their precision; it tends to be symmetric around the mean effect size in the absence of bias (Borenstein et al., 2009). We used the trim-and-fill method to (i) estimate the potential number of missing studies and (ii) correct the cumulative effect size for the potential omission of studies due to publication bias (Jennions et al., 2013). Finally, we also used Orwin's fail-safe number (OFSN; Orwin, 1983) to assess publication bias. The OFSN estimates how many studies would need to be included to reduce the unweighted average effect size to a threshold deemed non-relevant (Borenstein et al., 2009; 2017). We chose a non-relevant reduction of 25% in the mean effect size, following Ferreira Neto et al. (2022). For the publication bias analyses, we computed mean effect sizes and variances by study to avoid bias due to multiplicity. All analyses were performed using the "metafor" package (Viechtbauer, 2010) in the "R" software (R Development Core Team, 2020).
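Orwin's fail-safe number has a simple closed form; the Python sketch below illustrates it under the assumption that the hypothetical missing studies have a mean effect of zero (the usual convention):

```python
def orwin_fail_safe(effects, reduction=0.25):
    """Orwin's fail-safe number (Orwin 1983): how many null-effect
    studies would have to exist to shrink the unweighted mean effect
    size by the given proportional `reduction`."""
    k = len(effects)
    d_bar = sum(effects) / k               # unweighted mean effect
    d_c = (1 - reduction) * d_bar          # threshold deemed non-relevant
    return k * (d_bar - d_c) / d_c
```

Note that with a 25% reduction criterion, the formula simplifies to k/3 regardless of the mean effect, so the OFSN grows with the number of included studies.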