
Data from: Cherry-picking by trialists and meta-analysts can drive conclusions about intervention efficacy

Cite this dataset

Mayo-Wilson, Evan et al. (2018). Data from: Cherry-picking by trialists and meta-analysts can drive conclusions about intervention efficacy [Dataset]. Dryad. https://doi.org/10.5061/dryad.mp26fb1

Abstract

Please note: these data are also referred to in subsequent publications. See http://dx.doi.org/10.1016/j.jclinepi.2017.05.007 for more information.

Objectives

The objective of this study was to determine whether disagreements among multiple data sources affect systematic reviews of randomized clinical trials (RCTs).

Study Design and Setting

Eligible RCTs examined gabapentin for neuropathic pain and quetiapine for bipolar depression, as reported in public sources (e.g., journal articles) and nonpublic sources (clinical study reports [CSRs] and individual participant data [IPD]).

Results

We found 21 gabapentin RCTs (74 reports, 6 IPD) and 7 quetiapine RCTs (50 reports, 1 IPD); most were reported in journal articles (18/21 [86%] and 6/7 [86%], respectively). When available, CSRs contained the most information about trial design and risk of bias; CSRs and IPD contained the most results. For the outcome domains “pain intensity” (gabapentin) and “depression” (quetiapine), we found single trials with 68 and 98 different meta-analyzable results, respectively. By purposefully selecting one meta-analyzable result for each RCT, we could change the overall result for pain intensity from effective (standardized mean difference [SMD] = −0.45; 95% confidence interval [CI]: −0.63 to −0.27) to ineffective (SMD = −0.06; 95% CI: −0.24 to 0.12). We could change the effect for depression from a medium effect (SMD = −0.55; 95% CI: −0.85 to −0.25) to a small effect (SMD = −0.26; 95% CI: −0.41 to −0.10).

Conclusions

Disagreements across data sources affect the effect size, statistical significance, and interpretation of trials and meta-analyses.
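The Results above hinge on how trial-level SMDs are pooled: each trial's SMD is weighted by the inverse of its variance, so swapping which eligible result represents a trial shifts the pooled estimate. The sketch below illustrates that mechanism with standard inverse-variance fixed-effect pooling; it is not the authors' analysis code, and the trial SMDs and confidence intervals in it are hypothetical, chosen only to show how result selection moves the pooled effect.

```python
# Illustrative sketch (not the authors' analysis): inverse-variance
# fixed-effect pooling of standardized mean differences (SMDs).
# All trial-level numbers below are hypothetical.
import math

def pooled_smd(trials):
    """Pool (smd, ci_low, ci_high) tuples by inverse-variance weighting.

    Returns the pooled SMD and its 95% confidence interval.
    """
    weights, weighted = [], []
    for smd, lo, hi in trials:
        se = (hi - lo) / (2 * 1.96)   # back-calculate SE from the 95% CI
        w = 1.0 / se ** 2             # inverse-variance weight
        weights.append(w)
        weighted.append(w * smd)
    est = sum(weighted) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return est, (est - 1.96 * se_pooled, est + 1.96 * se_pooled)

# Two hypothetical "cherry-picked" selections of one result per trial:
# picking different eligible results shifts the pooled estimate, as the
# abstract describes.
selection_a = [(-0.50, -0.80, -0.20), (-0.40, -0.70, -0.10)]
selection_b = [(-0.10, -0.40, 0.20), (0.00, -0.30, 0.30)]
```

With `selection_a` the pooled SMD is clearly negative (suggesting benefit), while `selection_b` pools to near zero, even though both selections draw one result per trial.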

Usage notes