PT - JOURNAL ARTICLE
AU - Bolland, Mark J.
AU - Avenell, Alison
AU - Gamble, Greg D.
AU - Grey, Andrew
TI - Systematic review and statistical analysis of the integrity of 33 randomized controlled trials
AID - 10.1212/WNL.0000000000003387
DP - 2016 Nov 09
TA - Neurology
PG - 10.1212/WNL.0000000000003387
4099 - http://n.neurology.org/content/early/2016/11/09/WNL.0000000000003387.short
4100 - http://n.neurology.org/content/early/2016/11/09/WNL.0000000000003387.full
AB - Background: Statistical techniques can investigate data integrity in randomized controlled trials (RCTs). We systematically reviewed and analyzed all human RCTs undertaken by a group of researchers, about which concerns have been raised. Methods: We compared observed distributions of p values for between-groups differences in baseline variables, for standardized sample means for continuous baseline variables, and for differences in treatment group participant numbers with the expected distributions. We assessed productivity, recruitment rates, outcome data, textual consistency, and ethical oversight. Results: The researchers were remarkably productive, publishing 33 RCTs over 15 years involving large numbers of older patients with substantial comorbidity, recruited over very short periods. Treatment groups were improbably similar. The distribution of p values for differences in baseline characteristics differed markedly from the expected uniform distribution (p = 5.2 × 10⁻⁸²). The distribution of standardized sample means for baseline continuous variables and the differences between participant numbers in randomized groups also differed markedly from the expected distributions (p = 4.3 × 10⁻⁴, p = 1.5 × 10⁻⁵, respectively). Outcomes were remarkably positive, with very low mortality and study withdrawals despite substantial comorbidity. There were very large reductions in hip fracture incidence, regardless of intervention (relative risk 0.22, 95% confidence interval 0.15–0.31, p < 0.0001, range of relative risk 0.10–0.33), that greatly exceed those reported in meta-analyses of other trials. There were multiple examples of inconsistencies between and within trials, errors in reported data, misleading text, duplicated data and text, and uncertainties about ethical oversight. Conclusions: A systematic approach using statistical techniques to assess randomization outcomes can evaluate data integrity, in this case suggesting these RCT results may be unreliable.
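The core idea in the Methods section is that, under genuine randomization, p values for between-groups baseline comparisons should follow a Uniform(0,1) distribution, so a goodness-of-fit test on the observed p values can flag trials whose groups are "improbably similar." The sketch below is an illustration of that general idea, not the authors' exact analysis: it bins a collection of baseline p values and computes a chi-square goodness-of-fit statistic against uniformity. The two simulated samples (`honest`, `suspect`) are hypothetical data invented here for demonstration.

```python
import random

def chi_square_uniform(pvals, bins=10):
    """Chi-square goodness-of-fit statistic for p values vs Uniform(0,1).

    Bins the p values into `bins` equal-width intervals and compares
    observed counts with the equal expected counts under uniformity.
    """
    n = len(pvals)
    expected = n / bins
    counts = [0] * bins
    for p in pvals:
        counts[min(int(p * bins), bins - 1)] += 1  # clamp p = 1.0 into last bin
    return sum((c - expected) ** 2 / expected for c in counts)

random.seed(0)
# Hypothetical baseline p values from properly randomized trials: ~Uniform(0,1).
honest = [random.random() for _ in range(500)]
# Hypothetical p values clustered high, as when groups are "too similar" at baseline.
suspect = [0.5 + 0.4 * random.random() for _ in range(500)]

# With bins = 10, df = 9; the 0.95 critical value of chi-square(9) is about 16.92.
print(chi_square_uniform(honest))   # modest statistic, consistent with uniformity
print(chi_square_uniform(suspect))  # very large statistic, uniformity rejected
```

The same logic extends to the paper's other checks (standardized sample means, group-size differences): derive the distribution expected under honest randomization, then test the observed data against it.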