Catalogue of bias: publication bias
Nicholas J DeVito, Ben Goldacre

Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK

Correspondence to Dr Ben Goldacre, Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford OX2 6GG, UK; ben.goldacre@phc.ox.ac.uk


Background

Dickersin and Min define publication bias as the failure to publish the results of a study ‘on the basis of the direction or strength of the study findings’.1 This non-publication introduces a bias that impairs the ability to accurately synthesise and describe the evidence in a given area.2 Publication bias is a type of reporting bias and is closely related to dissemination bias, although dissemination bias generally applies to all forms of results dissemination, not simply journal publications. A variety of distinct biases are often grouped under the overall definition of publication bias.3 4

A number of risk factors and causes of publication bias have been identified in the literature.5 Documented causes include trialist motivation, past experience, and competing commitments; perceived or real lack of interest in results from editors, reviewers, or other colleagues; and conflicts of interest that lead to the suppression of results not aligned with a specific agenda.3 6–9 As the gatekeepers to publication, journal editors play a particularly complex role. Significant results are more widely cited in medicine, aligning the incentives of both investigators and editors towards such studies.10 A review by Song and colleagues reports studies showing that the strength and direction of study results do not affect the acceptance rates of submitted manuscripts; however, this research may not account for researchers selectively withholding poorly conducted or presented research with non-significant findings.3 Whether or not an editorial bias exists, the persistence of this belief among investigators may also affect which studies are submitted for publication.

Examples

In his 1986 paper on publication bias in clinical research, RJ Simes compared data reported to a cancer trial registry with data from the published literature on the survival impact of two cancer therapies. In both instances, Simes found that the survival impact of the therapies either disappeared or was substantially smaller when the subset of data published in the academic literature was compared against the more complete data from the registry.11

Publication bias is commonly assessed in cohort studies, such as the Simes example, in which publication status is ascertained for a group of known completed trials. Research into treatments for depression provides a more recent example. Turner and colleagues reported that 31% of a cohort of antidepressant drug studies registered with and reported to the Food and Drug Administration (FDA) were never published. The literature contained 91% positive studies, whereas the larger FDA cohort contained only 51% positive studies.12 Driessen and colleagues reviewed all NIH grants for psychological treatments for depression from 1972 to 2008, requesting the data from grant recipients when publications could not be found. Thirteen of 55 trials (23.6%) arising from this cohort were never published, and including the unpublished data in a pooled analysis with the published data reduced the effect size of psychological treatments by 25%.7
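
The arithmetic behind such comparisons is simple inverse-variance pooling. Below is a minimal sketch, using invented effect sizes and standard errors rather than data from any of the studies above, of how adding unpublished null results shrinks a fixed-effect pooled estimate:

```python
# Minimal sketch of how unpublished studies can shrink a pooled estimate.
# All effect sizes and standard errors are invented for illustration;
# they are not data from any of the studies cited above.

def pooled_effect(studies):
    """Fixed-effect inverse-variance pooled estimate from (effect, se) pairs."""
    weights = [1 / se ** 2 for _, se in studies]
    effects = [eff for eff, _ in studies]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Hypothetical published trials: mostly positive effects.
published = [(0.60, 0.20), (0.45, 0.15), (0.55, 0.25), (0.40, 0.18)]
# Hypothetical unpublished trials: smaller or null effects.
unpublished = [(0.05, 0.22), (0.10, 0.20), (-0.05, 0.25)]

pub_only = pooled_effect(published)
combined = pooled_effect(published + unpublished)

print(f"Published only:          {pub_only:.2f}")
print(f"Published + unpublished: {combined:.2f}")
print(f"Relative reduction:      {100 * (pub_only - combined) / pub_only:.0f}%")
```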

Results of cohort studies such as these have been collected in systematic reviews. A 2013 systematic review by Dwan and colleagues examined 20 cohort studies on publication bias in randomised controlled trials and showed that ‘statistically significant outcomes had a higher odds of being fully reported compared with non-significant outcomes (range of OR: 2.2 to 4.7)’.13 A 2014 systematic review by Schmucker and colleagues examined studies of publication and dissemination bias conducted using research approved by ethics committees or registered on a trial registry; across 23 cohort studies, they found that ‘statistically significant results were more likely to be published than those without (pooled OR 2.8; 95% CI 2.2 to 3.5)’.14
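
The odds ratios reported in these reviews summarise 2×2 tables of reporting status against statistical significance. As a minimal sketch, with invented counts rather than data from either review, an odds ratio and its 95% confidence interval can be computed from such a table as follows:

```python
import math

# Hypothetical 2x2 table of reporting status by statistical significance.
# Counts are invented for illustration, not taken from the cited reviews.
#                  fully reported   not fully reported
# significant           120                 30
# non-significant        60                 45

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a 95% CI from the normal approximation on the log scale."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

or_, lower, upper = odds_ratio_ci(120, 30, 60, 45)
print(f"OR {or_:.1f} (95% CI {lower:.1f} to {upper:.1f})")
```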

Impact

The examples above help illustrate the impact of publication bias, which can range from the non-publication of a single notable study to the compromised assessment of an entire therapeutic area. However, as with many biases, large-scale quantitative research has tended to focus on documenting the prevalence of publication bias rather than its impact, and assessing the direction and magnitude of the bias can be difficult.15 Schmucker and colleagues conducted a systematic review of studies on publication bias that additionally estimated the impact of unpublished studies on pooled effects. They found seven studies comparing pooled treatment effect estimates according to publication status; two of these showed a statistically significant effect of unpublished or grey literature data on the pooled estimates.16

Preventive steps

Prevention of publication bias can take many forms. Certain journals, such as Trials, have made the solicitation and publication of null results a part of their core mission.17 However, previously documented barriers to publication, such as time and investigator interest,3 cannot be addressed by the presence of journals receptive to null results.

The past decade has seen various initiatives in the US and EU requiring certain trials to report results directly to clinical trial registries in a structured data format within 12 months of completion, providing an additional avenue for dissemination outside academic journals. Unfortunately, there is growing evidence that these laws and guidelines are undermined by loopholes and poor compliance.18–21
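
At its core, checking compliance with such requirements is date arithmetic: results are due on the registry within 12 months of trial completion. A minimal sketch of flagging overdue trials under that assumption, using hypothetical trial records rather than real registry data (a real audit would query a registry such as ClinicalTrials.gov):

```python
from datetime import date, timedelta

# Hypothetical trial records; a real audit would pull completion dates and
# results-posting status from a registry such as ClinicalTrials.gov.
trials = [
    {"id": "TRIAL-001", "completed": date(2023, 1, 15), "results_posted": True},
    {"id": "TRIAL-002", "completed": date(2023, 3, 1), "results_posted": False},
    {"id": "TRIAL-003", "completed": date(2024, 11, 30), "results_posted": False},
]

# The US and EU rules discussed above set a 12-month reporting window;
# 365 days is used here as a simple approximation of that deadline.
REPORTING_WINDOW = timedelta(days=365)
today = date(2025, 1, 1)  # fixed so the example is reproducible

for trial in trials:
    due = trial["completed"] + REPORTING_WINDOW
    if not trial["results_posted"] and today > due:
        print(f"{trial['id']} overdue: results were due by {due}")
```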

Authors of systematic reviews and meta-analyses can also take steps to reduce the impact of non-publication on their work. The search for evidence should not be limited to journal articles indexed in repositories such as PubMed or Ovid; authors can and should search for results through other routes, including trial registries, regulatory documents, and contact with the trialists of known or suspected unpublished work.22 23 They can also use statistical methods to estimate whether their sample of studies is likely to be affected by publication bias. Funnel plots are a common way to visualise skew in the published findings; while useful, their interpretation must be considered carefully in light of the methods used to construct the plot.24–26 More rigorous statistical methods for assessing publication bias exist and should be considered for use in meta-research.23 27 28 While there is evidence that asymmetry tests for publication bias are underutilised, there is also evidence suggesting that they are not applicable to many meta-analyses.29 30
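
As a concrete illustration of one common asymmetry test, the sketch below simulates a biased sample of studies and applies Egger’s regression, in which the standardised effect is regressed on precision and an intercept far from zero suggests funnel-plot asymmetry. The data are simulated and the implementation is generic, not the specific method of any cited paper:

```python
import numpy as np
from scipy import stats  # linregress exposes intercept_stderr in SciPy >= 1.6

rng = np.random.default_rng(42)

# Simulate 40 studies around a true effect, then mimic publication bias by
# dropping small studies unless their estimated effect happens to be large.
true_effect = 0.2
se = rng.uniform(0.05, 0.5, size=40)
effect = rng.normal(true_effect, se)
kept = (se < 0.15) | (effect > 0.3)  # crude suppression of small null studies
effect, se = effect[kept], se[kept]

# A funnel plot is simply a scatter of effect against se with the y-axis
# inverted; Egger's test quantifies its asymmetry by regressing the
# standardised effect on precision and testing the intercept against zero.
z = effect / se
precision = 1 / se
fit = stats.linregress(precision, z)
t_stat = fit.intercept / fit.intercept_stderr
p_value = 2 * stats.t.sf(abs(t_stat), df=len(z) - 2)
print(f"Egger intercept {fit.intercept:.2f}, p = {p_value:.3f}")
```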

References

  1.
  2.
  3.
  4.
  5.
  6.
  7.
  8.
  9.
  10.
  11.
  12.
  13.
  14.
  15.
  16.
  17.
  18.
  19.
  20.
  21.
  22.
  23.
  24.
  25.
  26.
  27.
  28.
  29.
  30.

Footnotes

  • Contributors All authors contributed equally to manuscript drafting and revision.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.