Test, learn, adapt - great ideas meet messy realities

Published: 17 September 2019

There is a tension in applied health research. Randomised trials are a great way to test new services, treatments and policies, but are typically done in modest and fairly well-controlled environments. By contrast, the royal road to impact is to intervene ‘at scale’, delivering a policy or treatment at a regional or national level – but where randomisation and control may be difficult, if not impossible.

In 2012, Haynes and colleagues published ‘Test, Learn, Adapt’, which called for greater use of randomised controlled trials within policy to better test the effectiveness and cost-effectiveness of policy decisions. Even when the overall policy is not amenable to randomisation, particular aspects of how it is done or delivered can be. These studies are called ‘embedded’ trials. Large-scale implementation programmes are a potentially great platform for testing relevant questions through embedded trials, informing current and future implementation and building an empirical evidence base for how such programmes are best delivered.

Our team (DIPLOMA) is undertaking an NIHR-funded evaluation of the NHS Diabetes Prevention Programme (NHS DPP), and we were keen from the outset to use the programme as a platform for evaluating aspects of its delivery through embedded trials.

Over the past two years, NHS England and DIPLOMA researchers have discussed three important topics for embedded trials, each of which has failed to take off:

  1. The effect on recruitment and retention of a personalised referral conversation in general practice, where the GP or practice nurse would explain what it means to be at risk of diabetes, provide information about what would happen in the NHS DPP and discuss a referral.
  2. Different approaches to retention: how best to encourage people to stay on the programme rather than drop out before completing a significant portion of the NHS DPP course.
  3. Whether different approaches to providing physical activity within NHS DPP sessions improve the outcomes for patients.

There was real enthusiasm for all three ideas among the research team, the NHS DPP team (who already have a national implementation to contend with) and even the hard-pressed sites delivering the programme, and there was no lack of goodwill. But each time a variety of factors got in the way. Eventually, we all chose to focus on other parts of the evaluation.

So what went wrong and how can researchers and policy-makers learn from our experience of (not doing) embedded trials?

Eight lessons from our experience:

  1. Things move quickly in a massive programme like the NHS DPP. This does not align with research timetables, where modelling of interventions, study design and securing funding are all required to maximise research quality, and projects can be slow to start.
  2. Priorities change in response to emerging data or political priorities.
  3. Experiments mean additional work for the public sector, beyond their core focus of providing services for those in need. These experiments are also hard work for researchers: they are messy, time-consuming and they don’t fit neatly into the box of a funding application or journal article.
  4. Many people need to be persuaded. Public sector organisations are complex. Researchers often spend ages negotiating experiments with one or two people, only for those people to then feel the need to involve others in order to get senior buy-in.
  5. Pinning down the intervention is very hard. There must be a shared understanding of how an intervention will be designed. Will it be a real-world intervention that the policy-maker thinks has promise or will the idea come from academic evidence?
  6. Public sector ethos is to help the neediest, and in that context randomisation can seem unfair, as the intervention won’t necessarily go to those in need.
  7. Policy-makers are under pressure to get things ‘right first time’ and not make mistakes. This may make policy-makers averse to experiments, which could show their policy to be ineffective. On the other hand, experiments offer a space to innovate, without the risks seen in the usual roll-out process.
  8. Designing embedded trials in advance is not the answer. These could be included by researchers as part of a funding bid, but that would remove the flexibility to run with real-world ideas, and limit the input from policy makers.

Our proposal to encourage more embedded policy trials

Our proposal for change, which could address many of these concerns, would be to create the time and space for policy-makers and researchers to develop policy trials together, perhaps by NIHR supporting a standing public policy experiments programme (as the NIHR HTA has done for studies embedded within its trials).

About the authors:

Peter Bower is Professor of Health Services Research and Lead at the Centre for Primary Care at The University of Manchester and Co-Principal Investigator for the DIPLOMA Evaluation Programme. Sarah Cotterill is a Senior Lecturer in the Centre for Biostatistics at The University of Manchester, an academic within NIHR CLAHRC Greater Manchester and a Co-Investigator of the DIPLOMA Evaluation Programme.

Contact: peter.bower@manchester.ac.uk or sarah.cotterill@manchester.ac.uk

More information on the DIPLOMA study is available on the NIHR Journals Library website.

Peter Bower and Sarah Cotterill's blog is also published on the NIHR CLAHRC Greater Manchester website.