
Guidance for completing a review task (professional contributors)

Published: 26 June 2019

Version: 2.0 Aug 22

Before you complete your review

Potential Competing Interests

Before starting the review, please check the list of all applicants and co-applicants in each of the applications you are being asked to review. If you feel that a relationship with any organisation or individual could be perceived as influencing your review, please contact the relevant programme for further advice.

We ask that you do this well in advance of the deadline, to give us as much time as possible to find an alternative reviewer should you not be able to complete the review.

If you have a competing interest which we do not deem to be significant, you may still be asked to complete the review. In this instance, please provide details of the potential competing interest in the box provided in the reviewer form. We will not reject your opinion simply because you declare a competing interest, but we would like to know about it.

Deadline

As soon as you receive the review task, please ensure that you are clear on the deadline for submitting your review, and that you are able to complete and submit it within this timeframe. Because we usually invite you to review well in advance of the date you receive the task, we understand that circumstances can change and that you may no longer be able to commit the required time.

If it is no longer possible for you to complete the review by the given deadline, please contact the relevant programme as soon as possible.

It may be possible to grant a short extension to the deadline. However, in line with our commitment to keeping the time from application to funding as short as possible, all of our programmes work to extremely tight timetables, and a late review may arrive too late to be used as part of the application process. To avoid your efforts being wasted, please ensure that you are able to meet the deadline, or have agreed an extension, before beginning your review.

Confidentiality

Once you agree to undertake a review, the applications are shared with you. You will need to confirm that you will not disclose to any other person the fact that the applicant has applied for a research award, that you will not disclose the content of the application to any other person (including work colleagues), and that you will not use the information for any purpose other than providing a review of it to NETSCC. If you wish to seek input from a colleague, you must obtain permission from programme staff before sharing any details of the application or applicants. When completing your review, you should not reference other applications, or disclose any of their contents, where a decision on the funding of a study has not been made publicly available. Please do not make any comparisons with other applications you may be reviewing, as all of your comments (anonymised) will be seen by the applicants.

Your completed responses are considered confidential by the NIHR Evaluation, Trials and Studies Coordinating Centre (NETSCC). Your anonymised comments will be passed to applicants to consider and respond to prior to the meeting of the funding committee. A copy of the anonymised external peer reviewer comments will also be shared with other external peer reviewers after the committee meeting has taken place. Please bear in mind that your comments could be released more widely if a Freedom of Information (FOI) request were successful.

Completing your review

For information about individual NIHR funding programmes you can follow the links below:

• Efficacy and Mechanism Evaluation (EME) Programme
• Health and Social Care Delivery Research (HSDR) Programme
• Health Technology Assessment (HTA) Programme
• Public Health Research (PHR) Programme
• Evidence Synthesis Programme

We will be seeking comments from a range of individuals, both from professional backgrounds and members of the public, to gain a range of opinions on the merit of the proposed research. You need only review the sections relevant to your expertise, or those you feel comfortable commenting on.

Where to start?

The plain English summary is always a good place to start, followed by the detailed project description (an uploaded document), which contains most of the information you will need. Together these should give you a good background for understanding the proposal. Other sections of the application form may also be helpful and can be viewed if necessary, as can additional uploaded documents, such as study flow diagrams and reference material, which may help explain the study.

Most proposals will be in two parts:

• The application form, which includes detailed information about the team, the plain English summary and the Public and Patient Involvement section.

• The detailed project description (an uploaded document), which follows a standard format and forms part of the proposal. It provides a fuller account of the proposed project, with more detail about how the project will be carried out and its timelines.

Completing the reviewer assessment task

When reading the application, please keep the following questions in mind. Throughout your review, please try to identify both issues that are major concerns and those that are fixable faults. Any additional comments are also welcome. Our external peer reviewers are NOT required to comment on any detailed budget information.

How will the research make a difference? In your experience, will the research, as described, produce or have the potential to lead to, findings that will enable change and benefit patients and the public? This change could impact on the public, patients, carers, health and social care practitioners, decision makers and providers of health and social care services.

Is the proposed research feasible from your perspective? Can it be successfully delivered as described in the application? If not, please explain which areas would need to be addressed and why. You may want to consider the proposed study approach, the acceptability to all participants or any potential barriers to the research being successful.

What else could the applicants do to improve the research proposed?

Considering the questions above, please provide comments to support your score and explain your decision. The box will expand as you type into it. Please note the comments box has a limit of 10,000 characters, which equates to around 2-3 pages once pulled through into the PDF document.

Although we do not have a set word count, we ask that your review is at least a couple of paragraphs long.

Summary score

In this section we ask you to provide a summary score that reflects your overall assessment of the proposed research.

Scoring can often be challenging, particularly when you are keen to see research in an important area go forward. The following table may help when deciding on the score to give the proposal. Your score should reflect your overall assessment of the proposal.


6 - (Excellent) Proposed research can be funded as it stands
5 - (Good) Proposed research can be funded with minor changes
4 - (Good potential) There is much merit in this proposal, but it could be funded, perhaps after resubmission, with additional external support
3 - (Some merit) There are significant weaknesses in this proposal, but these could in principle be addressed
2 - (Poor) Weak proposal
1 - (Extremely poor) Unsupportable proposal

Example Reviews

As a point of reference, below are two reviews considered by the Funding Committee to be excellent examples. Some information has been redacted to ensure the anonymity of the reviewers.

Example Review 1 (Clinician)

The proposed study is ambitious, but is very well-planned and considered, with an experienced multi-disciplinary team that appears able to conduct a study of this scale and scope.

Strengths

The study team represents a collaboration between orthopaedic surgeons and rheumatologists, with representation from across a broad spectrum of orthopaedic sub-specialisms. I agree with the applicants that this will help to support study “buy-in” from researchers and boost study recruitment.

PPIE is a very strong central element to this application - it is clear that the researchers have put patients at the heart of their study design. The plans for extensive PPIE input throughout the research cycle demonstrate that PPIE will continue to form a central pillar of the trial if funded.

Study visits are integrated within routine post-operative visit schedules, thus minimising visit burden for patients and boosting study retention.

Whilst the applicants have responded to previous review comments by using EQ-5D-5L as the primary outcome, they have also included RA disease activity by a widely-accepted measure (DAS28-CRP) as a secondary endpoint. This secondary outcome will be of great interest to rheumatology audiences and I agree with the applicants that this is a key outcome measure.

Weaknesses/areas for improvement

Major

The applicants include patients taking a very wide range of biologic and targeted synthetic DMARDs within their study design. The wide range of different mechanisms of action, combined with widely varying dosing intervals (e.g. as short as 12 hours for JAK inhibitors, versus 4 weeks for golimumab), could introduce considerable heterogeneity in study outcomes. Although patients will stop drugs one dosing interval prior to surgery, the following 14-day drug-free period represents quite contrasting periods of drug cessation between groups, which could affect flare rates (for example, 14 days = 112 half-lives for tofacitinib, versus 1 half-life for golimumab). Can the applicants describe how their analysis will account for this effect, in order to justify their choice of including such a wide range of different drug therapies?

The applicants describe that their analysis will be adjusted for type of orthopaedic intervention, which I think will be essentially (1) joint replacement vs. (2) joint non-replacement vs. (3) soft tissue. However, different types of joint replacement surgery could be expected to have differing speeds of recovery, which could differentially affect quality of life at 6 weeks (e.g. rehabilitation from hand joint replacement surgery would be quite different from hip joint surgery). Can the applicants provide reassurance that different types of joint replacement surgery will be accounted for in their analysis plan, to justify the inclusion of a broad range of orthopaedic procedures?

Minor

The main application text mentions a target recruitment of 600 participants, whereas the detailed plan describes a 500-patient target.

What about concomitant conventional synthetic DMARD therapy (e.g. methotrexate)? I presume this would continue throughout the operative period; it is important to state this explicitly.

It is not clear why the main trial site recruitment period starts in July 2023, 4 months prior to the decision to commence the full trial in November 2023.

May I suggest that the applicants add “current/recent (<3 months) systemic steroid use” and “previous history of native/prosthetic joint infection” as exclusion criteria, as these are expected to be substantial confounding factors in the analysis.

What is the applicants’ approach to joint revision surgery? Revision surgery is longer, more technically challenging, and can be expected to have higher rates of complications (infective or non-infective). I would favour excluding patients undergoing revision surgery from the trial; if such patients are included, the applicants need to describe clearly how this would be accounted for in their analysis plan.

I feel the 6-month exclusion period for rituximab is too short, given the long-lasting effect of this B cell-depleting drug; I would favour an exclusion period of at least 2 years.

Example Review 2 (Statistical methods)

I was asked to provide statistical review of this application, so this review focuses on that. The methods for deconvolution of transcriptome measurements to infer immune cell proportions are adequately described and supported by validation studies.

The key statistical modelling problem is what to do with the data once the RNAseq measurements have been deconvoluted to cell types. Based on the papers cited, the dimensionality of the predictor (immune cell proportions) is about 30. The applicants will have fewer than 30 cases of the outcome of most interest (post-treatment relapse on RTX) but a larger set of related outcomes (relapse on treatment or in the azathioprine arm).

The statistical methods are alluded to only briefly in this application: "The final phase of analysis will be to validate data-driven hypotheses". There is no mention of how data from different time points will be combined. The applicants mention the problem of overfitting, but don't describe in detail how they will address it other than by seeking validation in independent datasets. It's preferable to control overfitting at the stage of training the model and to evaluate predictive performance before seeking external validation, as outlined below.

With this sample size (~30 relapse events in patients on RTX and ~30 predictors), it is essential to evaluate predictive performance by cross-validation, preferably taken to the limit of leave-one-out, so that nearly all the data are used to train each predictive model and all the data are used to evaluate predictive performance (every observation appears once in a test fold). It is also crucial that the entire procedure for feature selection and model fitting is repeated on each training fold, so that each test fold contains data not seen before.

A separate use of cross-validation is to tune regularization parameters so as to control overfitting, as in the elastic net method mentioned by the applicants. For this, nested cross-validation should be used so that the evaluation of predictive performance is done after the regularization parameters have been learned within the training folds. A more efficient and flexible alternative is to use Bayesian methods that effectively learn the regularization parameters from the training data, making nested cross-validation unnecessary (see Piironen and Vehtari (2017). Sparsity information and regularization in the horseshoe and other shrinkage priors. https://arxiv.org/abs/1707.01694 and the R package "brms").

More generally, for this type of problem, Bayesian methods would allow more efficient and complete use of the data: for instance a hierarchical model for predicting relapse in patients on each drug would allow learning the model for patients on one drug to borrow strength from the model for patients on the other drug.

To maximise the use of the data, there should be a plan for combining the mRNAseq measurements at each time point and the relapses in each interval into a single model for dynamic prediction (which is what would be most useful in clinical practice). Sequential Bayesian updating would be a possible approach to this.

In summary, this is a very strong application, but I would advise that the statistical analysis be undertaken in close consultation with someone experienced in this type of predictive modelling (large P, small N).
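The nested cross-validation procedure described in Example Review 2 can be sketched as follows. This is a minimal illustration using simulated data and scikit-learn; the dataset dimensions, variable names and choice of elastic-net logistic regression are assumptions for the sketch, not details taken from the application under review.

```python
# Minimal sketch of nested cross-validation for a small-N, large-P
# relapse-prediction problem. All data are simulated and all names
# are illustrative, not taken from the application.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients, n_cell_types = 60, 30                 # assumed dimensions
X = rng.normal(size=(n_patients, n_cell_types))   # deconvolved cell proportions
y = rng.integers(0, 2, size=n_patients)           # relapse yes/no

# Inner loop: tune the elastic-net penalty strength. Scaling is fitted
# inside each training fold, so the test fold never leaks into feature
# preparation.
inner = GridSearchCV(
    make_pipeline(
        StandardScaler(),
        LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, max_iter=5000),
    ),
    param_grid={"logisticregression__C": [0.01, 0.1, 1.0]},
    cv=5,
)

# Outer loop: leave-one-out evaluation. The whole tuning procedure is
# repeated on every training fold, so each held-out patient is unseen.
scores = cross_val_score(inner, X, y, cv=LeaveOneOut())
print(f"Leave-one-out accuracy: {scores.mean():.2f}")
```

With fully random data as here, the leave-one-out accuracy should hover around chance; the point of the structure is that the regularisation parameter is learned strictly within each training fold, which is what keeps the outer performance estimate honest.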

FAQs

I am unable to open the application PDF

If you are unable to open the application PDF, it is usually because pop-ups are disabled in the browser you are using. Please refer to your browser's guidance on enabling pop-ups for Mac and Windows.