Data from: Comparing traditional and Bayesian approaches to ecological meta-analysis

Cite this dataset

Pappalardo, Paula et al. (2020). Data from: Comparing traditional and Bayesian approaches to ecological meta-analysis [Dataset]. Dryad. https://doi.org/10.5061/dryad.zw3r22863

Abstract

1. Despite the wide application of meta-analysis in ecology, some of the traditional methods used for meta-analysis may not perform well given the type of data characteristic of ecological meta-analyses.

2. We reviewed published meta-analyses on the ecological impacts of global climate change, evaluating the number of replicates used in the primary studies (ni) and the number of studies or records (k) that were aggregated to calculate a mean effect size. We used the results of the review in a simulation experiment to assess the performance of conventional frequentist and Bayesian meta-analysis methods for estimating a mean effect size and its uncertainty interval.

3. Our literature review showed that ni and k were highly variable, that their distributions were right-skewed, and that both were generally small (median ni = 5, median k = 44). Our simulations showed that the choice of method for calculating uncertainty intervals was critical for obtaining appropriate coverage (close to the nominal value of 0.95). When k was low (<40), 95% coverage was achieved by a confidence interval based on the t-distribution with an adjusted standard error (the Hartung-Knapp-Sidik-Jonkman method, HKSJ) or by a Bayesian credible interval, whereas bootstrap or z-distribution confidence intervals had lower coverage. Despite the importance of the method used to calculate the uncertainty interval, 39% of the meta-analyses reviewed did not report the method used, and of the 61% that did, 94% used a potentially problematic method, which may be a consequence of software defaults.

4. In general, for a simple random-effects meta-analysis, the best frequentist and Bayesian methods performed similarly for the same combinations of factors (k and mean replication), though the Bayesian approaches had higher than nominal (>95%) coverage for the mean effect when k was very low (k<15). Our literature review suggests that many meta-analyses that used z-distribution or bootstrapping confidence intervals may have overestimated the statistical significance of their results when the number of studies was low; more appropriate methods need to be adopted in ecological meta-analyses.

Methods

This dataset includes two sets of data:

1) Results from a literature review of climate change meta-analyses (file Pappalardo_etal_LiteratureReview_Dataset.xlsx):

We searched the ISI Web of Science database for published meta-analyses on the ecological impacts of global climate change. The search string for TOPIC included ([“meta-analy*” OR “metaanaly*” OR “meta analy*”] AND [“climate change” OR “global change”]). We only included articles and reviews within the “Ecology”, “Environmental Sciences”, “Biodiversity Conservation”, and “Plant Sciences” categories, and only considered papers published between 2013 and 2016 in the final analysis. A detailed explanation of the literature search, abstract screening, and inclusion criteria is provided in the main paper and the Supporting Information. The full list of papers and the information collected from each meta-analysis are provided as an Excel file, which includes a metadata section explaining all the columns in each data sheet. We provide the R code used to compile the search files from Web of Science and to conduct the abstract screening in the file "Pappalardo_etal_R_Code.rmd", and we also provide the original data downloaded from the Web of Science as text files.
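
For orientation only, the sketch below (written for this description, not extracted from the archived "Pappalardo_etal_R_Code.rmd") shows one generic way such Web of Science export files could be combined into a single table in R. The directory name, the assumption of tab-delimited exports, and the expectation that all exports share the same columns are assumptions, not a description of the archived workflow.

    # Hypothetical sketch: combine tab-delimited Web of Science export files.
    # "wos_exports" and the file format are assumptions, not taken from the archived code.
    wos_files <- list.files("wos_exports", pattern = "\\.txt$", full.names = TRUE)
    wos_records <- do.call(rbind, lapply(wos_files, function(f) {
      read.delim(f, quote = "", stringsAsFactors = FALSE, check.names = FALSE)
    }))
    nrow(wos_records)  # total number of records retrieved across all export files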

2) Results from simulation experiments comparing the performance of different uncertainty intervals for estimating an overall effect size:

We used the results of the literature search to inform our simulations, and simulated data typical of an ecological meta-analysis. Each simulated dataset was analyzed using a simple random-effects meta-analytic model and different methods to calculate an uncertainty interval for an overall (mean) effect (three frequentist approaches and one Bayesian approach). For the Bayesian approach, we also explored different priors for the among-study variance. We compiled the results of each meta-analysis in separate .csv files and provide the summary files for the different methods and for the explorations using different priors. The files related to the simulation experiment are: summary_bayesian.csv, summary_metafor.csv, summary_metaforboot.csv, summary_metaforboot_tau.csv, and summary_priors.csv. We provide the R code used to simulate datasets, conduct the meta-analysis for each method, and summarize the data from the 2,000 replicated meta-analyses. We included metadata for the simulation experiment files in the file Pappalardo_etal_Metadata_SimulationExperiment.xlxs. The functions used for the simulations are provided in the R file "Functions_Pappalardo_etal.R" and are also included in the R Markdown file "Pappalardo_etal_R_Code.rmd".
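
As a point of reference only, the sketch below (written for this description, not extracted from the archived code) shows how one simulated dataset of this kind could be analyzed with the metafor package in R, comparing a conventional z-distribution confidence interval with the HKSJ-adjusted interval discussed in the abstract. All parameter values (k, per-study replication, true mean effect, among-study variance) are illustrative assumptions, not the values used in the study.

    # Minimal, hypothetical sketch: one simulated dataset, two uncertainty intervals.
    library(metafor)
    set.seed(1)
    k    <- 20    # number of studies (illustrative)
    ni   <- 5     # per-group replication in each primary study (illustrative)
    tau2 <- 0.10  # among-study variance (illustrative)
    mu   <- 0.30  # true mean effect, standardized mean difference (illustrative)

    theta <- rnorm(k, mu, sqrt(tau2))                         # study-level true effects
    trt   <- lapply(theta, function(th) rnorm(ni, th, 1))     # treatment-group data
    ctl   <- replicate(k, rnorm(ni, 0, 1), simplify = FALSE)  # control-group data

    dat <- escalc(measure = "SMD",
                  m1i = sapply(trt, mean), sd1i = sapply(trt, sd), n1i = rep(ni, k),
                  m2i = sapply(ctl, mean), sd2i = sapply(ctl, sd), n2i = rep(ni, k))

    fit_z    <- rma(yi, vi, data = dat, method = "REML", test = "z")     # z-distribution CI
    fit_hksj <- rma(yi, vi, data = dat, method = "REML", test = "knha")  # HKSJ-adjusted CI
    summary(fit_z)
    summary(fit_hksj)

With small k, the HKSJ-adjusted interval is typically wider than the z-based interval, which is the behavior the coverage results in the abstract describe.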

 

Usage notes

The file with the results from the literature review (Pappalardo_etal_LiteratureReview_Dataset.xlsx) includes a "Readme" and a "Metadata" section. The file Pappalardo_etal_Metadata_SimulationExperiment.xlxs includes the metadata for the simulation experiment files (summary_bayesian.csv, summary_metafor.csv, summary_metaforboot.csv, summary_metaforboot_tau.csv, and summary_priors.csv).

Missing values are coded as NA in the .csv files from the simulation experiments; in the literature review dataset, missing values may appear either as empty cells or as NA.
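
As a small, hypothetical usage sketch (file names are from this record, everything else is assumed), the simulation summaries can be read in R with NA treated as missing, and the literature review workbook can be read with both empty cells and NA treated as missing; the sheet name below is a placeholder, since the workbook's layout is documented in its own Readme and Metadata sections.

    # Hypothetical usage sketch; file names from this record, sheet name assumed.
    sims <- read.csv("summary_metafor.csv", na.strings = "NA")

    # readxl is one option for the Excel workbook; "Data" is a placeholder sheet name.
    # lit <- readxl::read_excel("Pappalardo_etal_LiteratureReview_Dataset.xlsx",
    #                           sheet = "Data", na = c("", "NA"))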

 

Funding

National Science Foundation, Award: DEB-1655426 and DEB-1655394

United States Department of Energy, Award: DE-SC-0010632