Published: 25 September 2023
A group of leading research funders have agreed on guidance for the use of generative AI tools in assessing funding applications.
The group agreed in a joint statement that generative AI tools must not be used in peer-reviewing grant applications. If generative AI is used in other contexts, such as preparing funding applications, it must be clearly cited and acknowledged.
The statement was agreed in response to the rise of generative AI tools like ChatGPT, which can produce large volumes of human-like text and images from prompts. Generative AI tools can be helpful in some contexts, such as assisting neurodivergent researchers and reducing language barriers. But there are risks that these tools could compromise confidentiality and research integrity if used to write peer review feedback.
The statement sets consistent standards on generative AI tools in funding applications and assessment across research funding organisations in the UK.
Signatories to the statement are members of the Research Funders Policy Group and include:
- The NIHR
- The Association of Medical Research Charities (AMRC)
- Cancer Research UK
- The British Heart Foundation
- The Royal Academy of Engineering
- The Royal Society
- UK Research and Innovation (UKRI)
- The Wellcome Trust
Director of Strategic Operations at NIHR, Quinton Newell, said:
“We strive to ensure all our research is underpinned by the highest standards of rigour and integrity, so the public can have confidence in it. NIHR is delighted to join some of the UK's largest research funders in working together to help protect the integrity of applications across the research landscape.
“While the use of AI has huge potential in healthcare, we must ensure funding is awarded transparently, based on merit and has the potential to improve lives. By ensuring the accuracy and honesty of applications, we help uphold the credibility of research, prevent misuse of resources and maintain public confidence.”