AI and Racial and Ethnic Inequalities in Health and Care - Competition Brief
Advances in Artificial Intelligence and data-driven technologies present great potential for health and care. Developments in AI could play an important role in improving the accuracy of diagnoses and screening; the understanding of biological, social and environmental elements of disease risk; and the quality and safety of healthcare. AI and data-driven technologies could also, in principle, be useful tools for addressing racial and ethnic disparities in health. At the same time, the design, development, and deployment of these technologies can be affected by structural racism, which can amplify bias and risks perpetuating inequalities for minority groups, as evidenced in different domains and sectors. There is a very real risk that these issues could manifest in health and care and exacerbate existing racial and ethnic inequalities in health outcomes, unless the root causes of these problems are addressed.
To date, there has been insufficient research into the design, tools and frameworks needed to ensure these technologies do not exacerbate inequalities in health and instead have a positive impact on minority ethnic communities. Additionally, there has been little exploration of whether and how AI developments could be used to address disparities in health and care, particularly in the UK.
In response to these gaps, the overall aim of this research call is to support the advancement of AI and data-driven technologies in health and care to improve health outcomes for minority ethnic populations in the UK.
- Sub-category 1: Understanding and enabling the opportunities to use AI to address inequalities, to account for the health needs of minority ethnic communities in the UK. Awards of £175,000 to £275,000.
- Sub-category 2: Optimising datasets, and improving AI development, testing and deployment, to ensure solutions work effectively for minority ethnic groups and do not exacerbate inequalities. Awards of £200,000 to £500,000.
Applicants can propose projects under one sub-category or the other. Both categories will fund projects of 12 to 24 months' duration. Sub-category 1 is funded by the Health Foundation and sub-category 2 by the NHS AI Lab; however, all projects will be overseen jointly by the two organisations. We particularly encourage applications that factor in time and resources to work with the NHS AI Lab to integrate findings into guidance, tools and best practice.
This one-stage funding call is open to UK-based higher education institutions, third sector organisations, charities, and NHS organisations or providers of NHS or social care services.
NHSX’s NHS AI Lab and The Health Foundation are partnering on this research call with the support of the National Institute for Health Research (NIHR).
There are two sub-categories to this call and applicants can propose projects under one or the other sub-category:
- Understanding and enabling opportunities to use AI to address health inequalities
- Optimising datasets, and improving AI development, testing, and deployment
The starting point for the development of this call was an acknowledgment of the racialised impact of deploying AI and predictive algorithms in health and care as evidenced by examples in the US (1). There is a risk that the use of AI and data-driven technologies could exacerbate existing racial and ethnic inequalities in health outcomes, and thus this call is focused on how to mitigate possible harm. At the same time, despite the recognised potential of these technologies for health and healthcare, there has been little exploration of whether and how they might be used to address disparities. The call therefore also encourages researchers to consider how AI and data-driven technologies could be leveraged to benefit people from minority ethnic groups and close gaps in health outcomes.
Racial and ethnic health inequalities in the UK
Structural inequities in health and healthcare have been illuminated by the Covid-19 pandemic, which has brought into sharp focus the disproportionate impact that disease can have on minority ethnic communities in the UK. A 2020 Public Health England review of disparities in the risks and outcomes of Covid-19 found that people of Black and Asian ethnicities had between a 10 and 50 percent higher risk of death compared with the White British population (2). Earlier in the pandemic, people of Black African or Black Caribbean ethnicity had a mortality rate from Covid-19 that was two to two and a half times higher than for people of White ethnicity (3). According to the Office for National Statistics (4), the disparity in death rates could be explained in large part by demographic, geographic, and socioeconomic factors (including occupation, education and housing conditions); however, the ONS notes that while the gap lessens when accounting for these factors, it remains significant.
Covid-19 is not the first or only considerable racial disparity in health outcomes in the UK; for example, there are longstanding disparities in perinatal and maternal health outcomes, as well as in mental health outcomes (4). More generally, there is some international evidence to suggest that racial inequalities exist in the onset, progression, and survival rates of illness (5, 6). Differential treatment or access to services could be a factor in these disparities, especially given the concerns voiced by some minority ethnic groups about their experiences of healthcare in the UK. The Parliamentary Joint Committee on Human Rights recently commissioned a survey to better understand the views, attitudes and perceptions of people from the Black community in the UK in relation to their human rights. It found that over 60 percent of Black people in the UK do not believe their health is equally protected by the NHS compared with White people, and that Black women (78 percent) in particular are more likely to feel this way and to report experiences of unequal treatment (7).
The racialised impact of AI
Artificial intelligence (AI) refers to the science of making machines perform tasks generally thought to require human intelligence (8). It encompasses a range of approaches including machine learning, which describes computers learning to perform a specific task from examples, data, and experience (9). For the purposes of this research call AI is defined as “the use of digital technology to create systems capable of performing tasks commonly thought to require intelligence” (10). As elaborated on by the Office for AI, AI generally involves machines using statistics to find patterns in large amounts of data and the ability to perform repetitive tasks with data without the need for constant guidance (10). This technology has the potential to be transformative in a range of sectors including healthcare as demonstrated by advancements in screening and diagnosis, drug discovery and personalised medicine, and operational efficiency (11). However, as promising as AI is, there are legitimate reasons to be concerned about whether this technology could deepen existing racial and ethnic health inequalities in the UK.
The Department for Health and Social Care, alongside many others in academia and the third sector, has cautioned that the technology could benefit some groups at the expense of others, highlighting biases in the data used for training algorithms (10,12,13). These biases might manifest because of the under-representation of ethnic minorities in training datasets, or due to the way data are coded, captured, combined and utilised for AI development. Bias can also arise because of pre-existing structural inequalities and social and historical injustices. For example, the training data themselves may reflect societal prejudice, which is then 'baked in' to the resulting AI models.
While concerns about algorithmic bias are generally expressed about the data underpinning AI, structural inequalities and/or pre-existing prejudice may also affect the application areas chosen for AI research or development; the design of AI models (which, for example, might be based on over-simplified or inappropriate assumptions); and the implementation and deployment of AI solutions that might not factor in pre-existing inequalities.
Algorithmic bias has been identified in a number of use cases for AI and other data-driven technologies in the US healthcare system and other sectors, such as in policing and criminal justice or in finance (14). These use cases have disproportionately disadvantaged minority ethnic patients, highlighting the need to specifically focus on addressing the racialised impact of AI within healthcare.
Tackling the racialised impact of AI and optimising AI for minority ethnic groups
In the broader field of data science, the entire analysis pipeline, spanning design, input, analysis and application, can be affected by and result in racism (15). As such, the health data science and research community has a crucial role in tackling racism at each stage of the analytical pipeline, to ensure their practice of data science does not reinforce existing social injustices and inequalities (10). Similarly, actions across each stage of the entire AI lifecycle are necessary both to mitigate the risks of perpetuating bias and inequalities and to increase the opportunities to leverage AI to reduce disparities and improve health outcomes for minority ethnic groups.
The NHS AI Lab highlights the importance of considering each step of this AI lifecycle in a white paper on how best to support and facilitate the use of AI-driven technologies within the health system (16). Broadly, this AI lifecycle, as illustrated by the UK Information Commissioner's Office (17), comprises the stages of:
- use-case development
- training and test data procurement
- testing and validation
- monitoring and deployment.
Others have contextualised the challenges and considerations for equitable machine learning using the pipeline for health care model development spanning ‘problem selection’ to ‘post-deployment considerations’ (18). The entire ‘pipeline’ or ‘lifecycle’ of technology development offers a number of opportunities for improvement that could help to mitigate risks and increase the likelihood that AI solutions will benefit ethnic minorities.
Use-case development and design
AI could in principle be used to examine and address inequalities in health. In a recent study, Pierson et al. used deep learning to measure the severity of knee osteoarthritis. The researchers used knee X-ray images to train a deep learning tool to predict patients' experienced pain (19). Relative to standard measures of severity graded by radiologists, the algorithmic approach was better at accounting for disparities in osteoarthritis pain, including racial disparities as well as disparities by income and by education (19). The study is one example of how AI could be used to identify and address disparities in healthcare, a question which on the whole has received very little attention to date. A range of factors may explain why, from the under-representation of minority groups in the technology and AI workforce and a lack of financial incentives, to insufficient engagement with affected communities and experts.
Data procurement, model building, testing and validation
The availability, accessibility and representativeness of datasets can affect the outcomes of AI for minority groups. Ibrahim and colleagues have described this issue as 'health data poverty', whereby 'a scarcity of data that are adequately representative' results in the 'inability for individuals, groups, or populations to benefit from a discovery or innovation' (20). As well as unrepresentative datasets, the design of algorithms may perpetuate bias, for example because of the inappropriate inclusion of data types, or because data or relevant context on the systemic factors contributing to health disparities are overlooked. The examples in Table 1 describe three challenges when designing algorithms and considering the collection, selection, and incorporation of data with race equality in mind. Another area for improvement lies in the testing and validation of models, to ensure that they are as accurate and effective for ethnic minority groups as they are for the majority population.
Expanding on challenges of bias in training data
Under-representation of ethnic minorities in training data
The under-representation of ethnic minorities in training data has been documented as a problem with a range of AI products in the US that entail computer vision. For example, ‘Gender Shades’, a 2018 study of gender classification algorithms, found that the facial recognition technology underpinning these systems was significantly less accurate at identifying darker-skinned females than lighter-skinned males (21). It was argued that since computer vision is being used in high-stakes sectors such as healthcare and law enforcement, “more work needs to be done in benchmarking vision algorithms for various demographic and phenotypic groups.”
In healthcare specifically, there is great potential for this technology to be applied to identify melanoma; however, concerns have been raised about the data available to train algorithms for detecting melanomas on darker skin. For example, it has been highlighted that the International Skin Imaging Collaboration: Melanoma Project, one of the largest open-source, public-access archives of pigmented lesions, is mainly composed of data collected from fairer-skinned populations in the United States, Europe, and Australia (22). Regardless of how advanced the machine learning algorithm may be, without representative datasets that are appropriately labelled and categorised, the algorithm is likely to underperform when applied to images of lesions on darker skin (23).
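The kind of disaggregated benchmarking called for above can be made concrete. The sketch below is illustrative only: the labels and group names are invented, not drawn from any of the studies cited. It shows why reporting a single aggregate accuracy can mask a large performance gap between demographic groups, and how scoring each group separately exposes it:

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Disaggregate classification accuracy by demographic group.

    Aggregate accuracy can hide large gaps between groups,
    so each group is scored separately.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

# Invented toy labels: overall accuracy looks acceptable (6/8 = 0.75),
# but disaggregation shows the model errs only on group B.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 1, 1, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(accuracy_by_group(y_true, y_pred, groups))  # group A: 1.0, group B: 0.5
```

The same pattern extends to any metric (sensitivity, calibration error, and so on); the essential step is that evaluation is reported per group, not only in aggregate.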
Use of historically biased data
While improving the representativeness of datasets is critical to the effectiveness of algorithms in some cases, the inclusion of data on race and ethnicity is not always justified and can skew outcomes, particularly if there are issues of historical bias pertaining to data collection and selection. The use of risk calculators in fields such as obstetrics has disadvantaged Black and Hispanic patients based on the inclusion of race and ethnicity data without sufficient evidence of its relevance. For example, in obstetrics, the Vaginal Birth after Caesarean (VBAC) algorithm predicts the risk posed by a trial of labour for a patient who has previously undergone a caesarean section; it predicts a lower likelihood of success for anyone identified as African American or Hispanic. It is not clear why these variables are included above others, but their inclusion has led to some African American and Hispanic women being dissuaded from vaginal delivery by clinicians basing their recommendations on the VBAC (24). This is particularly concerning given that vaginal births have fewer complications than caesareans and Black women already have higher rates of maternal mortality (25).
Although these calculators are not powered by AI, they are premised on predictive algorithms, and the lessons are therefore applicable when considering more advanced technologies with similar purposes.
Unjustified inclusion of factors that correlate with race
Algorithms do not always use race or ethnicity data, and thus do not necessarily prompt questions about race correction. However, even algorithms that exclude race and ethnicity data can produce racially biased outcomes if they include other variables that correlate with race. For example, in a study of an algorithm used widely in US hospitals to improve operational efficiency, Black patients were discriminated against even though race or ethnicity was not a factor in determining whether patients required follow-up care (1). It transpired that this was because the algorithm used health costs as a proxy for health needs, but since less money is spent on Black patients who have the same level of need as White patients the algorithm was less likely to identify Black patients for follow-up care. This example indicates that overcoming algorithmic bias is not as straightforward as simply improving the representativeness of datasets because there may be issues of structural bias. There must be greater consideration of how to account for intersectionality and the social determinants of health when developing AI models.
Deployment and monitoring
Once models are deployed in clinical or other health and care settings, a lack of oversight around their ethical use and assessment of the impact on minority groups may lead to potential harm going undetected. Evaluation and ongoing monitoring of AI models post deployment is therefore key. Chen et al. specify that post-deployment requires “careful performance reporting, auditing generalisability, documentation, and regulation”, which would enable various stakeholders to track and respond to any concerns about how AI solutions are impacting minority ethnic groups (18).
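One illustration of the kind of post-deployment auditing described above: the sketch below is a hypothetical sketch, with the metric, tolerance, and monitoring data all invented placeholders. Each reporting period, it recomputes a per-group sensitivity (true-positive rate) and flags any period where the gap between groups breaches a tolerance, so that drifting performance for a minority group does not go undetected:

```python
def sensitivity(y_true, y_pred):
    """True-positive rate: of the patients who needed care, how many were flagged."""
    flagged = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(flagged) / len(flagged) if flagged else float("nan")

def audit_period(records, tolerance=0.10):
    """Audit one reporting period of deployed-model decisions.

    `records` is a list of (group, y_true, y_pred) tuples. Returns the
    per-group sensitivity and whether the between-group gap breaches
    the (invented) tolerance, which would trigger human review.
    """
    by_group = {}
    for group, t, p in records:
        trues, preds = by_group.setdefault(group, ([], []))
        trues.append(t)
        preds.append(p)
    rates = {g: sensitivity(ts, ps) for g, (ts, ps) in by_group.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap > tolerance

# Invented monitoring data: group B's sensitivity has drifted downwards.
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 1, 0), ("B", 1, 0), ("B", 0, 0),
]
rates, breach = audit_period(records)
print(rates, "ALERT" if breach else "ok")  # gap of 0.5 exceeds tolerance
```

In practice this would sit alongside the performance reporting, documentation and regulatory mechanisms the authors describe, rather than replace them; the point is simply that subgroup metrics must be recomputed continuously after deployment, not only at validation time.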
Aims of the research call
The overall aim of this research call is to support the advancement of AI-driven technologies in health and care to improve health outcomes for minority ethnic populations in the UK.
As the NHS AI Lab seeks to accelerate the safe, ethical and effective adoption of AI across health and social care, there is recognition of the risks that algorithmic bias can pose in terms of potentially exacerbating existing health inequalities. The NHS AI Lab has launched the AI Ethics Initiative to invest in research and support practical interventions that complement and strengthen existing efforts to validate, evaluate, and regulate AI-driven technologies in health and care, with a focus on countering inequalities. The Health Foundation's vision for data-driven technology and analytics in the UK is one where everyone's health and care benefits from innovation in this space. This means ensuring that advances in data-driven technologies and analytics do not exacerbate existing inequalities, and that investment and advances in these technologies benefit underserved communities and are used to help reduce health inequalities.
It is important to consider how emerging AI solutions might deepen these inequalities, how they could be leveraged to help address them, and how we ensure that these technologies do not introduce new inequalities.
We particularly encourage applications that factor in time and resources to work with the NHS AI Lab to integrate findings into guidance, tools and best practice.
There are two sub-categories to this call, outlined in detail below, and applicants are expected to respond to one of the two.
1. Understanding and enabling opportunities to use AI to address racial health inequalities
This first category focuses on understanding what the opportunities are to use AI to improve the health outcomes of minority ethnic communities in the UK, and how they could be leveraged.
2. Optimising datasets, and improving AI development, testing, and deployment
This second category focuses on creating the conditions to facilitate the adoption of AI that serves the health needs of minority ethnic communities, including through mitigating the risks of perpetuating and entrenching racial health inequalities.
Sub-Category 1: Understanding and enabling the opportunities to use AI to address inequalities
Innovation in AI and the deployment of existing AI technologies may not benefit minority ethnic communities to the extent that they benefit the White majority population, for a number of reasons including a lack of diversity at a strategic level and within research teams, as well as weak incentives to develop products for smaller markets. The private sector and larger institutions are more likely to develop products that will offer the largest return on investment, which means that the specific health needs of minority ethnic communities may not necessarily be met. Despite significant investment in the field, there has been limited exploration of if and how AI might be utilised to address disparities and improve the health of minority ethnic communities. The core challenge here is how to better inform the research and development of AI solutions, in order to enable innovation in AI that benefits the health needs of minority ethnic communities and/or can reduce disparities in health outcomes.
Focus of research category
There may be a number of different ways of tackling this challenge, including, for example, initiatives that seek to improve the diversity of research and development teams in academia and/or the technology workforce. The focus of this category, however, will be on how to encourage approaches to innovation that are informed by the health needs of underserved communities and/or are bottom-up in nature, and on how to better understand the opportunities for AI to respond to the health needs of different minority ethnic groups.
Research in this category might entail analysis (including evidence synthesis or exemplar development) of how AI could be employed to address unmet health needs of minority ethnic communities, how it could be used to tackle inequalities in healthcare, or how existing solutions, such as particular clinical decision support tools, could be adapted to improve accuracy for minority ethnic patients. It could also entail community-engaged research and the trialling of initiatives that incentivise innovation in AI in health and care from the perspective of minority ethnic communities: for example, patient and public involvement and engagement (PPIE) or ethnographic studies that further an understanding of the specific challenges these communities face and/or their interactions with data-driven technology, in order to shape health outcomes.
Potential approaches and areas for investigation
The following are examples of possible research and outputs, but this is far from an exhaustive list. We will consider other project ideas and outputs that meet the stated objectives and focus of this category: understanding what the opportunities are to use AI to improve the health outcomes of minority ethnic communities in the UK, and how they could be leveraged.
Evidence synthesis (qualitative and quantitative research) – to review the existing evidence base and examine if/how AI can be employed to address unmet health needs or redirected to better work for minority ethnic communities, for example.
Community-engaged research – to understand problems faced by minority ethnic communities and areas where AI could make an impact. Projects could involve a range of methods from qualitative, quantitative, or mixed methods research, including PPIE and ethnographic research methods to incorporate the perspectives of specific minority ethnic communities, and/or clinicians, innovators and other experts and stakeholders with an interest in addressing ethnic disparities in health. Research could also consider how to involve communities in AI research and development, for example.
Exemplar development – case studies or research that examine uses of AI to address inequalities in health and care and generates learnings to inform future practice around how to effectively build technology to address unmet health needs, for example.
Possible outputs and their value
The following are some examples of possible outputs of value, but this is far from an exhaustive list.
- Research publications or reports that articulate the health needs of minority ethnic communities that are most amenable to AI solutions, or that detail potential innovations that could be adapted to work for minority ethnic communities.
- Research publication or reports that share transferable learnings from exemplar tech development to address inequalities in health and care, or that share recommendations for policy makers or innovators or other stakeholders.
- Resources that demonstrate how minority ethnic communities and other stakeholders can be meaningfully engaged and involved in the development of AI.
These outputs could aid in reorienting future funding and innovation and in aligning incentives to address the unmet health needs of minority communities. They could also foster greater involvement of stakeholders and affected communities in the development of technology, and generate tools or knowledge for designing technology to address the health needs of minority groups.
Sub-Category 2: Optimising datasets, and improving AI development, testing and deployment
As set out in the ‘Context’ section of the competition brief, there are different ways that bias can influence the datasets integral to model development, resulting in AI solutions that do not work effectively or accurately for minority ethnic groups. Potential problems may relate to the under-representation of minority ethnic groups in training datasets, but could also pertain to issues of structural bias, such as the unjustified inclusion of factors that correlate with race. There is a clear need for interventions that prevent or address complex issues of bias in datasets.
In addition to addressing bias within datasets, it is necessary to target potential bias during the development, testing and deployment stages of the pipeline. Chen et al. note that “just as data are not neutral, algorithms are not neutral” and describe algorithm development as an opportunity to make decisions about the construction of the underlying computation for the machine learning model or, in other words, the critical components of the algorithm (18). The performance of algorithms for minority ethnic groups must also be effectively evaluated and monitored to ensure that these algorithms continue to work as intended for these populations and/or that they don’t perpetuate inequalities.
Focus of research category
This element of the research call will seek to support approaches that can aid in improving the quality and availability of datasets and their appropriate application for minority ethnic groups, as well as focusing on research that can support improvements in the performance, testing, and monitoring of AI models across patient populations.
This could entail work that helps to improve the quality and representativeness of datasets for ‘training’ AI; approaches to facilitate the discoverability of diverse datasets and their appropriate application; and research that can help to inform guidelines, evaluation, and practice to address bias and promote equity in the stages of AI development, testing and deployment.
Potential approaches and areas for investigation
The following are examples of possible research and outputs, but this is far from an exhaustive list. We will consider other project ideas and outputs that meet the stated objectives and focus of this category: creating the conditions to facilitate the adoption of AI that serves the health needs of minority ethnic communities, including through mitigating the risks of perpetuating and entrenching racial health inequalities.
Community resource development, such as efforts to catalogue publicly available datasets with detailed descriptions of their composition, strengths, and limitations for repurposing.
AI dataset or model optimisation, including approaches to improve the capture, quality, labelling, classification or standardisation of health datasets with respect to ethnic minorities.
Interdisciplinary research to improve understanding of how to reflect the complex intersection between social determinants of health and other demographic and clinical features in AI models, for example.
Technical analyses to evaluate the effectiveness of computational approaches to prevent racial bias in modelling, for example.
Research to improve evaluation processes, including research to inform guidelines aimed at enhancing the performance of models in sub-populations and increasing transparency about the characteristics of the training data used, or to determine how to factor equity dimensions into evaluation standards for AI.
Possible outputs and their value
- Resources that enhance understanding of how to improve the quality and utilisation of datasets for training AI models.
- Research publications or reports that document new findings or applications of novel techniques, e.g. computational approaches to avoid perpetuating inequalities by design and/or approaches to promote equity by design.
- Resources that can inform best-practice, guidance, and evaluation frameworks for AI development, testing, and/or deployment with racial equity in mind.
Finally, while the focus of the research call is on optimising AI for minority ethnic groups, the call does not preclude projects whose activities and findings have applicability across different population groups in addition to minority ethnic populations. We are interested in learning how the findings of the proposed research activities could be applicable to other underserved groups.
Those submitting applications are also asked to consider:
- How you will ensure that the innovation enhances equity of access (e.g. takes account of underserved ethnic or economic groups) and helps the NHS towards its target to reach net zero carbon.
- How the proposed research will contribute to addressing inequalities in health among UK minority ethnic populations and circumvent well-documented challenges with the development, design and implementation of AI and data-driven algorithms which could amplify bias and risk exacerbating existing inequalities for minority groups.
- What measures the project will have in place to ensure equal and meaningful opportunities for people from diverse backgrounds, particularly those known to be under-represented such as, minority ethnic groups, women, and people living with a disability, to be involved throughout the project.
- The expected impact of the project on diverse groups, potential long-term consequences of the research and the approach to managing risks regarding the impact of the work.
- Where applicable, consideration and documentation about the provenance of data to be used, including any privacy risks and data security measures.
- While the focus of the research call is on AI and minority ethnic groups, how the findings of the proposed research activities might be applicable to other groups that might be underserved or those with protected characteristics.
- How the proposed solution will impact on the care system and how the system will need to be changed (including people, processes and culture) in order to deliver system-wide benefits.
- How you will ensure that the innovation will be acceptable to patients (and their families and wider support network) and to health and social care workers. How you could ensure these groups will be involved in the design of a solution and its development.
Useful Information for Applicants
Research excluded from this competition
- Basic science (TRL 1)
- Clinical or drug efficacy research, discovery, development, clinical trials
- Any technology which would negatively impact staff workloads
- Research which does not address one of the two sub-categories
For any digital intervention, the NICE Digital Health Technology Framework should be consulted, and your application should evidence your plan to meet the appropriate evidence guidelines. This comprises both clinical effectiveness and economic evaluation.
In addition, please consult the NHSX guidelines for “Designing and building products and services”, which includes the latest links to all relevant standards, guidelines and consultations. Data-driven health and care technology for artificial intelligence systems used by the NHS will need compliance with the AI Code of Conduct.
The funders support NHS England and NHS Improvement’s commitment to minimise health inequalities and realise net-zero emissions by 2040.
This one-stage competition is open to UK-based Higher Education Institutions (HEIs), third sector organisations or charities, and NHS organisations or providers of NHS or social care services.
The competition opens on Wednesday 24 March 2021. The deadline for applications is 1pm on Wednesday 28 April 2021.
The AI and Racial and Ethnic Inequalities in Health and Care competition is enabled by NIHR, using their research funding knowledge, expertise, and robust methodology. All applications should be made using the application portal which can be accessed through the Programme Management Office Research Management System (PMO RMS). Applicants are invited to consult this competition brief document and the applicant guidance before submitting their application.
The competition launch event was held on 10 March 2021 and provided an opportunity for prospective applicants to hear from both NHSX’s NHS AI Lab and the Health Foundation, and to find out more about the competition and its sub-categories. A recording of the event is available on NHSX's NHS AI Virtual Hub (once you have registered).
Please complete your application using the PMO RMS and submit by 1pm on Wednesday 28 April 2021.
- 15 February 2021
- Launch event: 10 March 2021
- Competition opens: 24 March 2021
- Deadline for applications: 1pm, Wednesday 28 April 2021
- 1 October 2021
For more information on this competition, visit the NIHR AI webpage and select ‘Racial and Ethnic Inequalities’ tab.
For information, events and to engage with the AI Community, sign up to NHSX’s NHS AI Virtual Hub.
For any enquiries e-mail: firstname.lastname@example.org
For more information about NHSX’s NHS AI Lab and The Health Foundation, please visit:
- Obermeyer Z, et al. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019; 366, 6464: 447-453. [Accessed 19th March 2021]
- Public Health England. Disparities in the risk and outcomes of COVID-19. 2020. [Accessed 19th March 2021]
- Office for National Statistics. Why have Black and South Asian people been hit hardest by COVID-19? 2020. [Accessed 19th March 2021]
- Fernandez Turienzo C, Newburn M, Agyepong A, et al. Addressing inequities in maternal health among women living in communities of social disadvantage and ethnic diversity. BMC Public Health. 2021; 21, 176. [Accessed 19th March 2021]
- Paradies Y, et al. Racism as a determinant of health: A systematic review and meta-analysis. PLoS ONE. 2015; 10, 9. [Accessed 22nd March 2021]
- Williams D. Miles to go before we sleep: Racial inequalities in health. Journal of Health and Social Behavior. 2012; 53, 3: 279-295. [Accessed 22nd March 2021]
- Henry C, Imafidon K, McGarry N. The Black Community and Human Rights. ClearView Research; 2020. [Accessed 19th March 2021]
- Minsky M. Semantic information processing. The MIT Press; 1968.
- The Royal Society. Machine learning: the power and promise of computers that learn by example. London: The Royal Society; 2017. [Accessed 19th March 2021]
- Department of Health & Social Care. A guide to good practice for digital and data-driven health technologies. GOV.UK; 2021. [Accessed 19th March 2021]
- Joshi I, Morley J. Artificial Intelligence: How to get it right. London: NHSX; 2019. [Accessed 19th March 2021]
- Parikh RB, et al. Addressing Bias in Artificial Intelligence in Health Care. JAMA. 2019; 322, 24: 2377-2378. [Accessed 22nd March 2021]
- Leslie D, et al. Does “AI” stand for augmenting inequality in the era of covid-19 healthcare? BMJ. 2021; 372: n304. [Accessed 22nd March 2021]
- Centre for Data Ethics and Innovation. Review into bias in algorithmic decision-making. 2020. [Accessed 22nd March 2021]
- Knight HE, et al. Challenging racism in the use of health data. The Lancet Digital Health. 2021; 3, 3: 144-146. [Accessed 19th March 2021]
- Global Digital Health Partnership. AI for healthcare: Creating an international approach together. NHSX; 2020. [Accessed 19th March 2021]
- Binns R, Gallo V. An overview of the Auditing Framework for Artificial Intelligence and its core components. Information Commissioner’s Office; 2021. [Accessed 19th March 2021]
- Chen I, et al. Ethical machine learning in health care. Annual Review of Biomedical Data Science. 2020. [Accessed 19th March 2021]
- Pierson E, et al. An algorithmic approach to reducing unexplained pain disparities in underserved populations. Nature Medicine. 2021; 27: 136-140. [Accessed 19th March 2021]
- Ibrahim H, et al. Health data poverty: an assailable barrier to equitable digital health care. The Lancet Digital Health. 2021. [Accessed 19th March 2021]
- Buolamwini J, Gebru T. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research. 2018; 81: 1-15. [Accessed 19th March 2021]
- Adamson AS, Smith A. Machine Learning and Health Care Disparities in Dermatology. JAMA Dermatol. 2018; 154, 11: 1247-1248. [Accessed 22nd March 2021]
- Kamulegeya LH, et al. Using artificial intelligence on dermatology conditions in Uganda: A case for diversity in training data sets for machine learning. 2019. [Accessed 19th March 2021]
- Reveal. Reproducing racism. News transcript; 2020. [Accessed 19th March 2021]
- Vyas DA, Jones DS, Meadows AR, Diouf K, Nour NM, Schantz-Dunn J. Challenging the use of race in the vaginal birth after caesarean section calculator. Women’s Health Issues. 2019; 29: 201-4.