The Health Technology Assessment (HTA) Programme funds research into the clinical effectiveness, cost-effectiveness and broader impact of healthcare treatments and tests for those who plan, provide or receive care from NHS and social care services. HTA research is undertaken where some evidence already exists to show that a technology can be effective, and that technology needs to be compared with the current standard intervention to see which works best. Put simply, HTA intervention study proposals say: “we know it can work, that it is safe and that it can be delivered in the NHS; we now want to know whether it works in the NHS and whether it is cost-effective.”
Given that national pragmatic HTA trials typically cost over £1m of public funds, the HTA programme needs to be convinced that an intervention is ready for HTA researcher-led evaluation. This document outlines issues that may determine this judgement in relation to intervention studies.
Generally, an intervention is ready for HTA evaluation if:
- There is a reasonable chance that it will be effective
- It has already been tested in a typical NHS or social care setting
- There is a reasonable chance it will be used across the NHS if shown to be effective
HTA evaluation may also be appropriate if the intervention is already widely used in the NHS, but evidence of benefit and harms is lacking.
Is there a reasonable chance of the intervention being effective?
HTA research is undertaken when evidence exists to show that a technology can be effective. HTA trials usually determine effectiveness by testing for a clinically important difference in a primary outcome measure. The HTA Programme needs to be convinced that the primary outcome effect that the trial is powered to detect is plausible, given what is known about how the intervention works and the existing evidence.
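To make "powered to detect" concrete, the sketch below shows a standard two-arm sample-size calculation for a continuous primary outcome. It is a minimal illustration, not a template for a real trial: the clinically important difference, the standard deviation and the trial parameters are all hypothetical.

```python
# Minimal sketch of a two-arm sample-size calculation, assuming a
# continuous primary outcome. All numbers are hypothetical, not drawn
# from any specific HTA trial.
import math
from statsmodels.stats.power import TTestIndPower

clinically_important_difference = 5.0  # hypothetical, in outcome units
outcome_sd = 15.0                      # hypothetical standard deviation
effect_size = clinically_important_difference / outcome_sd  # Cohen's d

analysis = TTestIndPower()
n_per_arm = analysis.solve_power(
    effect_size=effect_size,  # standardised difference to detect
    alpha=0.05,               # two-sided significance level
    power=0.90,               # probability of detecting the difference
)
print(f"Participants needed per arm: {math.ceil(n_per_arm)}")
```

The smaller the plausible difference, the larger the trial, which is one reason the programme scrutinises whether the target difference is realistic.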
The HTA Programme needs a clear explanation of how the intervention works and how it can produce the clinically important difference in the primary outcome. This may involve describing a theoretical framework or biological mechanism in a way that can be understood without specialist knowledge.
All HTA proposals should be preceded by a systematic review of existing trials to determine whether further research is needed. It is fine to cite published systematic reviews, but these should be supplemented by up-to-date searches for more recent trials. Existing evidence (including studies other than trials) should be used to show that the clinically important difference is plausible. The type of evidence presented will determine whether the case for HTA funding is convincing:
- Existing trials of effectiveness can provide good evidence of the need for an HTA trial, especially if meta-analysis of small underpowered trials suggests that the clinically important difference in primary outcome is plausible (see the sketch after this list).
- Existing trials of efficacy can provide good evidence of the need for an HTA trial of effectiveness. Efficacy trials are powered to detect differences in measures of disease activity rather than the health or health service outcomes used in HTA trials of effectiveness. Efficacy trials can provide good supporting evidence if they show an effect on disease activity that could translate into the clinically important difference in primary outcome.
- Although pilot trials can provide an estimate of the anticipated effect size, the estimate is likely to be imprecise, so they provide only weak evidence of the need for an HTA trial. Furthermore, the effect size could also be biased by the choice of sites or participants, or simply because the concentrated effort possible in a small study often inflates the intervention effect.
- Observational studies showing an association between the intervention and the proposed primary outcome only provide weak evidence of the need for an HTA trial, regardless of the strength or statistical significance of the association. Such studies are prone to confounding by indication and are only helpful if this risk is addressed convincingly.
- Uncontrolled before-and-after intervention studies are also at high risk of bias and provide only very weak evidence for the plausibility of the postulated effect. However, they can provide evidence of the deliverability of the intervention in the NHS and its acceptability to patients (see below).
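The sketch below illustrates the first point in the list: a fixed-effect (inverse-variance) meta-analysis pooling three hypothetical small trials, none of which is individually statistically significant. All the effect estimates and standard errors are invented for illustration.

```python
# Minimal fixed-effect (inverse-variance) meta-analysis sketch.
# Effect estimates and standard errors are hypothetical: three small
# trials, none individually significant at the 5% level.
import numpy as np
from scipy import stats

effects = np.array([0.30, 0.25, 0.40])     # hypothetical mean differences
std_errors = np.array([0.20, 0.18, 0.25])  # hypothetical standard errors

weights = 1.0 / std_errors**2              # inverse-variance weights
pooled_effect = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
z = pooled_effect / pooled_se
p_value = 2 * stats.norm.sf(abs(z))

print(f"Pooled effect: {pooled_effect:.2f} (SE {pooled_se:.2f}), p = {p_value:.3f}")
```

Run with these invented numbers, the pooled estimate reaches conventional significance even though no single trial does, which is the kind of signal that can make the clinically important difference look plausible.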
The programme takes into account the potential difficulties in generating the evidence outlined above for some interventions, and recognises that it is more difficult to provide convincing evidence of efficacy for complex interventions than for simpler interventions such as drugs. However, there still needs to be a convincing explanation of how a complex intervention can produce the postulated effect, together with a signal of a plausible effect on a health outcome or a strong surrogate.
Has the intervention been used in typical NHS or social care practice with typical NHS or social care users?
HTA trials determine whether an intervention is effective and cost-effective when delivered in typical NHS or social care settings with typical NHS or social care users. The programme therefore needs to be convinced that the intervention is ready for this type of pragmatic evaluation. This can be achieved by providing evidence from routine data, audit or small-scale evaluation showing that the intervention can be delivered with reasonable fidelity by typical practitioners and is acceptable to typical patients or users.
Evaluation of fidelity, adherence and acceptability may form part of the proposed HTA trial, so we do not expect these to be definitively demonstrated before an application is submitted, but there should be a reasonable expectation that they will be achieved. The evidence required will depend on the nature of the intervention. A complex intervention or one involving a high level of professional skill or patient involvement will require more evidence than a simple intervention. An intervention that has only ever been used in a very specialist setting or never used in the NHS is unlikely to be considered ready for HTA evaluation.
Is there a reasonable chance that the intervention (if effective) will be delivered across the NHS or social care?
There are a number of factors, other than effectiveness, that may determine whether an intervention is implemented across the NHS or social care. These include:
- Cost-effectiveness: An expensive intervention may not be cost-effective, according to the thresholds for willingness to pay for health gain used by the National Institute for Health and Care Excellence (NICE), even if it is highly effective (see the worked example after this list). The programme will need to be convinced that this issue has been considered for proposals to evaluate expensive interventions.
- Lack of staff or facilities: It is usually assumed that health or social care resources will be found to support implementation if an intervention is cost-effective. However, there may be other factors that prevent implementation, such as a lack of specialist staff or facilities across the UK. The programme will need to be convinced that interventions requiring specialist staff and facilities can be delivered across health or social care, if shown to be effective and cost-effective.
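As a back-of-envelope illustration of the cost-effectiveness point above, the sketch below computes an incremental cost-effectiveness ratio (ICER) and compares it with the £20,000–£30,000 per quality-adjusted life year (QALY) range conventionally cited for NICE decisions. The cost and QALY figures are invented.

```python
# Minimal ICER sketch: incremental cost per QALY gained versus the
# current standard intervention. All figures are hypothetical.
incremental_cost = 6_000.0   # extra cost per patient vs comparator (GBP)
incremental_qalys = 0.25     # extra QALYs gained per patient vs comparator

icer = incremental_cost / incremental_qalys     # GBP per QALY gained
threshold_low, threshold_high = 20_000, 30_000  # conventional NICE range

print(f"ICER: £{icer:,.0f} per QALY")
if icer <= threshold_low:
    print("Likely cost-effective at conventional thresholds")
elif icer <= threshold_high:
    print("Borderline: depends where in the range the threshold falls")
else:
    print("Unlikely to be judged cost-effective at conventional thresholds")
```

Even a highly effective intervention can fail this test if the incremental cost is large relative to the health gain.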
The intervention is already widely used in the NHS, but evidence of benefit and harms is lacking
The programme also takes into account the current use of the intervention in the NHS or social care. If the intervention is already widely used despite a lack of evidence of effectiveness, HTA evaluation offers the potential to reduce unnecessary and potentially harmful intervention. In these circumstances, the aim is to stop ineffective practice rather than to provide evidence of a worthwhile effect, and applicants need to convince the programme that decision-making will be responsive to robust evidence and will result in major changes to practice.
A more complicated scenario involves a proposal intended to show that a cheaper intervention (e.g. less intensive physiotherapy or cognitive behavioural therapy), or one with fewer side effects, has effectiveness comparable to the current treatment. In this situation, the programme will need to be convinced that the new intervention has the potential to achieve comparable effectiveness, drawing on a convincing explanation of how it can do so at lower cost or risk, and on evidence of efficacy.
We thank Professor Sandra Eldridge for her constructive comments.