AI explainability in oculomics: how it works, its role in establishing trust, and what still needs to be addressed
Songyang An, Kelvin Yi Chong Teo, Michael V. McConnell, et al.

Progress in Retinal and Eye Research, Journal Year: 2025, Volume and Issue: unknown, P. 101352

Published: March 1, 2025

Recent developments in artificial intelligence (AI) have seen a proliferation of algorithms that are now capable of predicting a range of systemic diseases from retinal images. Unlike traditional disease detection AI models, which are trained on well-recognised biomarkers, these "oculomics" models often use poorly characterised biomarkers to arrive at their predictions. As the phenotypes used in oculomics may not be intuitive, clinicians must rely on developers' explanations of how these models work in order to understand them. The discipline of understanding how AI works employs two similar but distinct terms: Explainable AI (xAI) and Interpretable AI (iAI). xAI describes the holistic functioning of an AI system, including its impact and potential biases. iAI concentrates solely on examining the workings of the algorithm itself. iAI tools are, therefore, what a clinician must examine if they are to understand how a model works and whether its predictions are reliable. The iAI methods available to developers can be delineated into two broad categories: intrinsic methods, which improve transparency through architectural changes, and post-hoc methods, which explain trained models via external algorithms. Currently, post-hoc methods, class activation maps in particular, are far more widely used than other techniques, but they have limitations, especially when applied to oculomics models. Aimed at clinicians, we examine how the key iAI methods work, what they are designed to do, and what their limitations are when applied to oculomics AI. We conclude by discussing how combining existing methods with novel approaches could allow developers to better reassure clinicians of the reliability of the results these models issue.
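
As a concrete illustration of the post-hoc category the abstract refers to, the sketch below shows a minimal Grad-CAM style class activation map in PyTorch. This is an assumption-laden illustration, not the authors' method: the resnet18 backbone, the choice of layer4 as the target layer, and the random input tensor are stand-ins for an oculomics model and a preprocessed retinal image.

    # Minimal Grad-CAM sketch (a common post-hoc class-activation-map method).
    # Model, target layer, and input are illustrative stand-ins only.
    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet18

    model = resnet18(weights=None).eval()  # untrained stand-in for an oculomics model
    target_layer = model.layer4            # last convolutional block

    activations, gradients = {}, {}

    def save_activation(module, inputs, output):
        activations["value"] = output.detach()

    def save_gradient(module, grad_input, grad_output):
        gradients["value"] = grad_output[0].detach()

    target_layer.register_forward_hook(save_activation)
    target_layer.register_full_backward_hook(save_gradient)

    x = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed retinal image
    logits = model(x)
    class_idx = int(logits.argmax(dim=1))  # explain the top predicted class
    model.zero_grad()
    logits[0, class_idx].backward()

    # Weight each feature map by its spatially averaged gradient, sum the
    # weighted maps, apply ReLU, then upsample the heatmap to input resolution.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)  # shape (1, C, 1, 1)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)    # normalise to [0, 1]
    print(cam.shape)  # torch.Size([1, 1, 224, 224]): saliency heatmap over the input

The heatmap highlights the image regions that most increased the predicted class score, which is precisely the kind of evidence clinicians are asked to interpret, and, as the paper argues, such maps can be misleading when the underlying oculomics biomarkers are poorly characterised.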

Language: English

Citations: 0