An Investigation into Race Bias in Random Forest Models Based on Breast DCE-MRI Derived Radiomics Features

Mohamed Huti, Tiarna Lee, Elinor J. Sawyer

et al.

Lecture Notes in Computer Science, Journal Year: 2023, Volume and Issue: unknown, P. 225 - 234

Published: Jan. 1, 2023

Language: English

Data leakage inflates prediction performance in connectome-based machine learning models
Matthew Rosenblatt, Link Tejavibulya, Rongtao Jiang

et al.

Nature Communications, Journal Year: 2024, Volume and Issue: 15(1)

Published: Feb. 28, 2024

Abstract Predictive modeling is a central technique in neuroimaging to identify brain-behavior relationships and test their generalizability to unseen data. However, data leakage undermines the validity of predictive models by breaching the separation between training and test data. Leakage is always an incorrect practice but remains pervasive in machine learning. Understanding its effects on predictive models can inform how leakage affects the existing literature. Here, we investigate five forms of leakage–involving feature selection, covariate correction, and dependence between subjects–on functional and structural connectome-based machine learning models across four datasets and three phenotypes. Leakage via feature selection and repeated subjects drastically inflates prediction performance, whereas other forms have minor effects. Furthermore, small datasets exacerbate the effects of leakage. Overall, our results illustrate the variable effects of leakage and underscore the importance of avoiding it to improve the reproducibility of predictive modeling.
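
A minimal sketch of the most damaging leakage mechanism described above, feature selection performed on all subjects before the train/test split, contrasted with a leakage-free pipeline. The data here are synthetic placeholders (random "connectome" features and a pure-noise phenotype), not material from the cited study.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5000))   # stand-in for connectome edge features
y = rng.normal(size=100)           # pure-noise phenotype: true predictability is ~0

# Leaky: select the 50 "best" features using ALL subjects, then split.
X_sel = SelectKBest(f_regression, k=50).fit_transform(X, y)
Xtr, Xte, ytr, yte = train_test_split(X_sel, y, random_state=0)
print("leaky R^2: ", Ridge().fit(Xtr, ytr).score(Xte, yte))   # typically well above 0

# Correct: keep selection inside a pipeline fit only on the training fold.
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
pipe = make_pipeline(SelectKBest(f_regression, k=50), Ridge())
print("proper R^2:", pipe.fit(Xtr, ytr).score(Xte, yte))      # typically near or below 0
```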

Language: English

Citations

33

Demographic bias in misdiagnosis by computational pathology models
Anurag Vaidya, Richard J. Chen, Drew F. K. Williamson

et al.

Nature Medicine, Journal Year: 2024, Volume and Issue: 30(4), P. 1174 - 1190

Published: April 1, 2024

Language: English

Citations

22

Understanding Biases and Disparities in Radiology AI Datasets: A Review
Satvik Tripathi, K. R. Gabriel, Suhani Dheer

et al.

Journal of the American College of Radiology, Journal Year: 2023, Volume and Issue: 20(9), P. 836 - 841

Published: July 16, 2023

Language: English

Citations

28

A scoping review of fair machine learning techniques when using real-world data
Yu Huang, Jingchuan Guo, Wei‐Han Chen

et al.

Journal of Biomedical Informatics, Journal Year: 2024, Volume and Issue: 151, P. 104622 - 104622

Published: March 1, 2024

The integration of artificial intelligence (AI) and machine learning (ML) in health care to aid clinical decisions is widespread. However, as AI and ML take on important roles in care, there are concerns about the associated fairness and bias. That is, an AI tool may have a disparate impact, with its benefits and drawbacks unevenly distributed across societal strata and subpopulations, potentially exacerbating existing inequities. Thus, the objectives of this scoping review were to summarize the literature and identify gaps in the topic of tackling algorithmic bias and optimizing fairness in AI/ML models using real-world data (RWD) in health care domains. We conducted a thorough review of techniques for assessing and optimizing model fairness when using RWD; the focus lies on appraising different quantification metrics for assessing fairness, publicly accessible datasets for fairness research, and bias mitigation approaches. We identified 11 papers that focused on such applications. The current research on mitigating fairness issues is limited, both in terms of disease variety and applications, as well as the accessibility of public datasets for fairness research. Existing studies often indicate positive outcomes when using pre-processing approaches to address bias. There remain unresolved questions within the field that require further research, which includes pinpointing the root causes of bias in models, broadening the range of applications studied, exploring the implications in real-world healthcare settings, and evaluating and addressing bias in multi-modal data. This paper provides useful reference material and insights for researchers regarding fairness and reveals the gaps in the field. Fair machine learning in health care is a burgeoning field that requires heightened research attention to cover diverse applications and types of RWD.
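
A small sketch of two of the fairness quantification metrics such reviews appraise, computed with plain NumPy on invented predictions and group labels (nothing here comes from the cited review): the demographic parity difference (gap in selection rates between groups) and an equal-opportunity gap (difference in true-positive rates).

```python
import numpy as np

# Toy labels, predictions, and a binary sensitive attribute (all invented).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

def selection_rate(pred):
    """Fraction of individuals predicted positive."""
    return pred.mean()

def true_positive_rate(true, pred):
    """Fraction of actual positives that are predicted positive."""
    pos = true == 1
    return pred[pos].mean()

rates = {g: selection_rate(y_pred[group == g]) for g in np.unique(group)}
tprs  = {g: true_positive_rate(y_true[group == g], y_pred[group == g]) for g in np.unique(group)}

print("demographic parity difference:", max(rates.values()) - min(rates.values()))
print("equal-opportunity (TPR) gap:  ", max(tprs.values()) - min(tprs.values()))
```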

Language: English

Citations

16

Machine Learning and Bias in Medical Imaging: Opportunities and Challenges
Amey Vrudhula, Alan C. Kwan, David Ouyang

et al.

Circulation Cardiovascular Imaging, Journal Year: 2024, Volume and Issue: 17(2)

Published: Feb. 1, 2024

Bias in health care has been well documented and results in disparate and worsened outcomes for at-risk groups. Medical imaging plays a critical role in facilitating patient diagnoses but involves multiple sources of bias, including factors related to access to imaging modalities, acquisition of images, and assessment (ie, interpretation) of imaging data. Machine learning (ML) applied to diagnostic imaging has demonstrated the potential to improve the quality of imaging-based diagnosis and the precision of measuring imaging-based traits. Algorithms can leverage subtle information not visible to the human eye to detect underdiagnosed conditions or derive new disease phenotypes by linking imaging features with clinical outcomes, all while mitigating cognitive bias in interpretation. Importantly, however, the application of ML can either reduce or propagate bias. Understanding the potential for gain as well as the risks requires an understanding of how and what ML models learn. Common risks of propagating bias arise from unbalanced training data, suboptimal architecture design or selection, and uneven application of models. Notwithstanding these risks, ML may yet be used to improve equity across the 3A's (access, acquisition, assessment) for all patients. In this review, we present a framework to balance the opportunities and challenges of minimizing bias in medical imaging, current approaches to mitigation, and specific considerations that should be made as part of efforts to maximize equity for all.
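
One common way to surface the "uneven application" risk named above is a subgroup audit: report the model's discrimination separately for each demographic group rather than only in aggregate. The sketch below does this on synthetic scores in which the under-represented group is deliberately simulated as noisier; it illustrates the audit pattern and is not code from the cited review.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000
group  = rng.choice(["majority", "minority"], size=n, p=[0.9, 0.1])
y_true = rng.integers(0, 2, size=n)

# Simulate a classifier whose scores are noisier for the under-represented group.
noise_sd = np.where(group == "majority", 0.5, 1.5)
y_score  = y_true + rng.normal(0.0, noise_sd, size=n)

# The aggregate AUC hides the gap that the per-group audit reveals.
print(f"overall  AUC: {roc_auc_score(y_true, y_score):.2f}")
for g in ("majority", "minority"):
    m = group == g
    print(f"{g:>8} AUC: {roc_auc_score(y_true[m], y_score[m]):.2f}")
```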

Language: English

Citations

12

Annual Research Review: ‘There, the dance is – at the still point of the turning world’ – dynamic systems perspectives on coregulation and dysregulation during early development
Sam Wass, Emily Greenwood, Giovanni Esposito

et al.

Journal of Child Psychology and Psychiatry, Journal Year: 2024, Volume and Issue: 65(4), P. 481 - 507

Published: Feb. 23, 2024

During development we transition from coregulation (where regulatory processes are shared between child and caregiver) to self-regulation. Most early coregulatory interactions aim to manage fluctuations in the infant's arousal and alertness; but over time, they become progressively elaborated to encompass other functions such as sociocommunicative development, attention, and executive control. The fundamental aim of coregulation is to help the child maintain an optimal ‘critical state’ between hypo- and hyperactivity. Here, we present a dynamic systems framework for understanding child–caregiver coregulation in the context of psychopathology. Early coregulatory interactions involve both passive entrainment, through which the child's state entrains to the caregiver's, and active contingent responsiveness, through which the caregiver changes their behaviour in response to the behaviours of the child. Similar principles, of interactive but asymmetric contingency, drive the joint maintenance of epistemic states as well as of arousal/alertness and emotion regulation across development. We describe three ways in which coregulation can develop atypically, in conditions such as Autism, ADHD, anxiety, and depression. The most well-known of these is insufficient contingent responsiveness, leading to reduced synchrony, which has been shown across a range of modalities and different disorders and is the target of current interventions. We also present evidence that responsiveness and synchrony can be excessive in some circumstances. And we show how positive feedback loops can develop that are mutually amplificatory and drive the child further from the critical state. We discuss the implications of these findings for future intervention research and directions for future work.

Language: English

Citations

9

Advancing Fairness in Cardiac Care: Strategies for Mitigating Bias in Artificial Intelligence Models Within Cardiology
Alexis Nolin-Lapalme, Denis Corbin, Olivier Tastet

et al.

Canadian Journal of Cardiology, Journal Year: 2024, Volume and Issue: 40(10), P. 1907 - 1921

Published: May 11, 2024

Language: English

Citations

7

The future of machine learning for small-molecule drug discovery will be driven by data
G. J. Durant, Fergus Boyles, Kristian Birchall

et al.

Nature Computational Science, Journal Year: 2024, Volume and Issue: 4(10), P. 735 - 743

Published: Oct. 15, 2024

Language: English

Citations

5

Artificial intelligence for dementia prevention
Danielle Newby, Vasiliki Orgeta, Charles R. Marshall

et al.

Alzheimer's & Dementia, Journal Year: 2023, Volume and Issue: 19(12), P. 5952 - 5969

Published: Oct. 14, 2023

Abstract INTRODUCTION A wide range of modifiable risk factors for dementia have been identified. Considerable debate remains about these factors, possible interactions between them or with genetic risk, and causality, and about how they can help in clinical trial recruitment and drug development. Artificial intelligence (AI) and machine learning (ML) may refine this understanding. METHODS ML approaches are being developed for dementia prevention. We discuss exemplar uses and evaluate the current applications and limitations in the dementia prevention field. RESULTS Risk-profiling tools may help identify high-risk populations for trials; however, their performance needs improvement. New risk-profiling and trial-recruitment tools underpinned by ML models may be effective in reducing costs and improving future trials. ML can inform drug-repurposing efforts and the prioritization of disease-modifying therapeutics. DISCUSSION ML is not yet widely used but has considerable potential to enhance precision in dementia prevention. Highlights: ML is not yet widely used in dementia prevention research and practice. Causal insights are needed to understand risk factors over the lifespan. AI will help personalize risk-management and could target the specific patient groups that benefit most.

Language: English

Citations

12

Neuroimaging data repositories and AI-driven healthcare—Global aspirations vs. ethical considerations in machine learning models of neurological disease
Christine Lock, Nicole Si Min Tan, Ian James Long

et al.

Frontiers in Artificial Intelligence, Journal Year: 2024, Volume and Issue: 6

Published: Feb. 19, 2024

Neuroimaging data repositories are data-rich resources comprising brain imaging with clinical and biomarker data. The potential for such repositories to transform healthcare is tremendous, especially in their capacity to support machine learning (ML) and artificial intelligence (AI) tools. Current discussions about the generalizability of such tools provoke concerns about the risk of bias: ML models may underperform in women and in ethnic and racial minorities. The use of ML may thus exacerbate existing healthcare disparities or cause post-deployment harms. Do neuroimaging data repositories, and the ML/AI-driven discoveries they enable, have the potential both to accelerate innovative medicine and to harden gaps and social inequities in neuroscience-related healthcare? In this paper, we examined the ethical concerns of ML-driven modeling of global community neuroscience needs arising from the data amassed within such repositories. We explored this in two parts; firstly, in a theoretical experiment, we argued for a South East Asian-based repository to redress global imbalances. Within this context, we then considered a framework toward the inclusion vs. exclusion of the migrant worker population, a group subject to healthcare inequities. Secondly, we created a model simulating the impact of variations in the presentation of anosmia risks in COVID-19 on altering structural imaging findings; we performed a mini AI ethics experiment. We interrogated an actual pilot dataset (n = 17; 8 non-anosmic (47%) vs. 9 anosmic (53%)) using a clustering model. To create the simulation model, we bootstrapped to resample and amplify the dataset. This resulted in three hypothetical datasets: (i) matched (n = 68; 47% anosmic), (ii) predominantly non-anosmic (n = 66; 73% disproportionate), and (iii) predominantly anosmic (n = 76; 76% disproportionate). We found that the differing proportions of the same cohorts represented in each dataset altered not only the relative importance of key features distinguishing between them but even the presence or absence of such features. The main objective of our experiment was to understand whether ML/AI methodologies could be utilized toward modelling disproportionate datasets, in a manner we term "AI ethics." Further work is required to expand the approach proposed here into a reproducible strategy.
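
A rough sketch of the bootstrap-resampling step described in the abstract: amplifying a 17-subject pilot cohort (8 non-anosmic, 9 anosmic) into larger hypothetical datasets with different anosmic proportions. The feature column, helper function, and exact target mixes are assumptions for illustration, not the authors' data or code.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
pilot = pd.DataFrame({
    "anosmic": [0] * 8 + [1] * 9,       # 8 non-anosmic, 9 anosmic, as in the pilot
    "feature": rng.normal(size=17),     # stand-in for a structural imaging measure
})

def bootstrap_cohort(df, n_total, frac_anosmic, rng):
    """Resample each subgroup with replacement to hit a target size and mix."""
    n_pos = int(round(n_total * frac_anosmic))
    pos_idx = rng.choice(df.index[df["anosmic"] == 1], size=n_pos, replace=True)
    neg_idx = rng.choice(df.index[df["anosmic"] == 0], size=n_total - n_pos, replace=True)
    return df.loc[np.concatenate([pos_idx, neg_idx])].reset_index(drop=True)

cohorts = {
    "matched":           bootstrap_cohort(pilot, 68, 0.47, rng),
    "non-anosmic-heavy": bootstrap_cohort(pilot, 66, 0.27, rng),
    "anosmic-heavy":     bootstrap_cohort(pilot, 76, 0.76, rng),
}
for name, d in cohorts.items():
    print(f"{name:>18}: n={len(d)}, {d['anosmic'].mean():.0%} anosmic")
```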

Language: English

Citations

4