
Improving reporting and utility of evaluations of complex interventions
  Denise Campbell-Scherer1, Richard Saitz2,3

  1 Department of Family Medicine, University of Alberta, Edmonton, Alberta, Canada
  2 Section of General Internal Medicine, Boston University & Boston Medical Center, Boston, Massachusetts, USA
  3 Editorial Office, BMJ Group, London, UK

  Correspondence to Dr Denise Campbell-Scherer, Department of Family Medicine, University of Alberta, 2927-66 St Cedars Professional Park, Edmonton, Alberta, Canada T6K 4C1; denise.campbell-scherer@ualberta.ca


Global healthcare systems are buckling under the increasing burden of chronic disease and multimorbidity.1 Research into the efficacy of interventions to address chronic disease is needed. The potential solutions often involve complex, non-drug interventions such as processes of care, diet, exercise and behavioural interventions; areas with a deficit of study.2 Unfortunately, knowledge creation and dissemination alone are insufficient to change behaviour, practice and policy within diverse healthcare contexts.3,4 There is a need to study the impact of interventions in situ, in multiple contexts. However, there have been concerns about the current reporting and quality of studies in these areas.2,5

In designing studies of real-world interventions, researchers must move beyond unidirectional models of behaviour change and instead engage evidence users, patients and clinicians, in identifying the questions and interventions of interest,6 and in planning and executing the study so that implementation efforts are appropriate to context.7–9 Co-creation in this way increases the likelihood that the problems identified, and their solutions, resonate with patients and clinicians, and therefore that there will be strong participant engagement and sustained effort to implement the clinical change.8,9

From a researcher's perspective there are significant challenges to undertaking work in this way. Non-medication interventions are of keen interest to patients and clinicians,6 but it is methodologically much easier to design a trial evaluating a regulated intervention (a medication, device or procedure) than to design an evaluation of non-regulated interventions (ie, service delivery, behavioural interventions, physical therapies).6 The substantial up-front engagement work this approach requires is time consuming, difficult to achieve within research funding cycles, and high risk, because it depends on contextual stability in real-world clinical care organisations to maintain engagement throughout the study.9 A significant advantage, though, is that when it succeeds, true engagement with the partner allows the research to leverage the existing resources of systems of care. This reduces the cost of the research and increases the chance that the organisation will sustain successful changes once the study is complete.

If we are to address the important mismatch between what patients and clinicians need and what clinical researchers do, and thereby reduce the associated waste,6 we need to increase understanding among researchers and research funders of the importance of an integrated knowledge translation approach. Peters and colleagues, writing in the BMJ in 2013,10 describe an approach to the incremental work required to explore the questions, contextual realities and design of trials testing the implementation of interventions in practice; a process that does not lend itself to a single trial but rather to a comprehensive programme of research.

Researcher education on the importance of trial registration for non-medication interventions; of an up-to-date systematic review of the topic;6 of a solid theoretical underpinning for the intervention and for the dissemination and implementation approach;11 and of a complete description of the intervention5 will improve the quality of research design and reporting. Clear attention to all aspects of trial design that can reduce bias,6 together with the adoption of mixed-methods evaluation to assess implementation process, fidelity, contextual stability and impacts of the intervention beyond the prespecified outcome measures,12 will enrich the quality of the data from these trials.

It is crucial that all components of the content of complex interventions and their implementation be reported. Inadequate reporting makes it impossible to adjudicate efficacy and impossible to translate interventions into clinical practice.5,13 The CONSORT (CONsolidated Standards of Reporting Trials) 2010 statement14 states that “the interventions for each group should be presented with sufficient details to allow replication, including how and when they were actually administered”; however, the detail needed to replicate an intervention requires more than the short paragraph found in a typical trial report.5 Exciting new efforts to promote reporting of the content of complex interventions are underway. Paul Glasziou and Tammy Hoffmann of Bond University, Australia are championing the call for improved reporting of non-drug interventions in trials. They have led an international panel to create the TIDieR checklist and guide, a template for intervention description and replication.15 It is important that journals and funders support this work through increased expectations for reporting. They should require that non-medication interventions be reported in detail, in addition to requiring reporting of the protocol.

It is worth reflecting on the meaning of a ‘complex intervention’ and its implications. Adapting and extending existing concepts from other disciplines,16 we can consider the following three common properties of a complex intervention:

  1. Complex collective behaviours: The collective actions of many individuals, each with their own inherent complexity and no absolute co-ordination, giving rise to changing patterns of interactions.

  2. Signalling and information processing: The many individuals involved with the intervention produce and use information and signals from both their internal and external environments.

  3. Adaptation: Complex interventions must adapt. To be successful in varied contexts, there must be clear direction on what is to be done, with flexibility in how the intervention is carried out. While complex interventions in the real world can have key content and features that can be standardised, their implementation must allow flexibility in delivery to account for the contextual reality of the setting (eg, resources, personnel and geography), with implications for trial design, reporting and trial syntheses.

Thus one conception of a complex intervention is one in which component individuals combine in collective action to achieve a common goal using well-defined, objective components of an intervention, but do so with complex collective behaviour, sophisticated information processing, and adaptation to ensure contextual appropriateness and success. Conceived of in this way, it becomes apparent that no intervention involving human behaviour and interaction can be considered ‘simple’. To understand what was happening in the intervention, a detailed description must be made of the intervention’s content, context and delivery, as well as its impacts. Only then will we be able to meaningfully synthesise data from multiple trials, to explore the active components of interventions that produce the desired outcomes, and to understand the unintended consequences of the intervention.

The Medical Research Council (MRC) in the UK has provided a framework for developing and evaluating complex interventions, an update and refinement of earlier work.17 This document provides a robust summary of considerations for researchers conducting studies on complex interventions. The Patient-Centered Outcomes Research Institute (PCORI) in the US has also produced methodology standards.18 Despite these efforts, there are still problems with adequate reporting of the content, context and process of interventions in real-world trials.

Trials of diverse behavioural interventions delivered in different contexts will inherently be difficult to combine in the conventional systematic review and meta-analysis format; this should not be cast as evidence of ‘poor quality trials’. The frequent disclaimer in meta-analyses that “this review should be considered with caution since we observed statistical, clinical, methodological heterogeneity” underscores that current methodology is ill suited to syntheses of complex intervention trials. A recent example, on chronic disease management programmes for adults with asthma,19 nicely summarised some of the disparate intervention components in each study. However, it would be more useful if each study had a detailed qualitative description of the intervention’s content, implementation, context and impact. Then a high-quality meta-synthesis could be done to accompany the systematic review and meta-analysis, shedding light on which components of the intervention were having desired and less desired impacts.

The lack of penetrance of this MRC guidance into evidence synthesis efforts has the potential to do harm by promoting misinterpretations of efficacy and inefficacy. An example is brief counselling interventions for unhealthy alcohol use. There are now two dozen systematic reviews and meta-analyses of alcohol screening and brief intervention trials.20 They find modest efficacy for reducing self-reported alcohol consumption and no consistent effects on laboratory evidence of harm, clinical outcomes or healthcare utilisation.21 The findings apply only to people who drink risky amounts, and to primary care settings, but not to those with alcohol use disorders; yet in clinical practice, when one screens, one cannot exclude those with disorders as is done in research. Context likely matters a great deal. Unlike a medication, which has the same biological mechanism whether it is taken at home or elsewhere, counselling about a health behaviour may have different effects when delivered during a preventive care visit by a clinician known to the patient through longitudinal care than when delivered by a clinician the patient will see only once, in an acute care setting, for a condition unrelated to the behaviour. Systematic reviews of alcohol screening and brief intervention in hospital and emergency settings find mixed results and do not consistently demonstrate efficacy.22,23

Furthermore, systematic reviews find that repeated, but not single, interventions have efficacy; that there is little association between the duration of interventions and outcome; and that there is a great deal of heterogeneity.24,25 ‘Brief intervention’ includes a variety of approaches such as advice, motivational interviewing, brief negotiated interventions and feedback, as well as interventions described by their length (eg, 5 min, 1 h) and number of contacts (1–4), which can be in person, by telephone or by other electronic means. This variety often cannot be sorted out in systematic reviews because of a lack of detail in the original reports. Yet such information is critical for understanding what works, what does not, and what works better. Effect sizes vary greatly, likely because of many of the aforementioned factors, yet limitations in reporting lead them all to be categorised and summarised as alcohol brief interventions. For real-world clinical practice, a clinician needs to know how much time to spend counselling, what the content should be, who should do it, when and where it should be done, and for whom it works. For a practice whose studies have now been summarised in numerous systematic reviews, we still do not know the information necessary for the practice to be useful clinically. And in the US, the practice has received large investments in the past decade for nationwide dissemination of brief, one-time counselling delivered by a clinician without a longitudinal relationship with the patient, across varied clinical settings, and for drugs other than alcohol; its efficacy in these circumstances is therefore questionable at best.26

The Institute of Medicine in the USA has issued a report, “Psychosocial interventions for mental and substance use disorders”.27 The serious public health implications of this problem, which affects 20% of Americans, underscore the urgency of the research agenda to study how to successfully implement, sustain and improve, in clinical practice, psychosocial interventions known to be effective. The only way this will be achieved is through application of the principles of the MRC framework in research practice.

Clearly, the time has come for increased attention to the challenges that trials of complex interventions present for evidence-based medicine.


Footnotes

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; internally peer reviewed.