Frontiers in Artificial Intelligence, Journal year: 2024, Number: 7
Published: Nov. 25, 2024
Modern artificial intelligence (AI) solutions often face challenges due to the “black box” nature of deep learning (DL) models, which limits their transparency and trustworthiness in critical medical applications. In this study, we propose and evaluate a scalable approach based on a transition matrix to enhance the interpretability of DL models in signal and image processing by translating complex model decisions into user-friendly and justifiable features for healthcare professionals. The criteria for choosing interpretable features were clearly defined, incorporating clinical guidelines and expert rules to align model outputs with established standards. The proposed approach was tested on two datasets: electrocardiography (ECG) for arrhythmia detection and magnetic resonance imaging (MRI) for heart disease classification. Performance was compared with expert annotations using Cohen’s Kappa coefficient to assess agreement, achieving coefficients of 0.89 for the ECG dataset and 0.80 for the MRI dataset. These results demonstrate strong agreement, underscoring the reliability of the approach in providing accurate, understandable, and justifiable explanations of AI decisions. The scalability of the approach suggests its potential applicability across various domains, enhancing its generalizability and utility while addressing practical and ethical considerations.
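For context, Cohen’s Kappa measures agreement between two raters beyond what would be expected by chance. The sketch below is purely illustrative and not the authors’ implementation: it assumes scikit-learn’s cohen_kappa_score and hypothetical binary arrhythmia labels to show how agreement between model-derived interpretable labels and expert annotations could be computed.

```python
# Illustrative sketch only: agreement between expert annotations and
# model-derived labels, as assessed with Cohen's Kappa.
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary labels: 1 = arrhythmia, 0 = normal rhythm
expert_labels = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
model_labels  = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

# Kappa close to 1 indicates strong agreement beyond chance
kappa = cohen_kappa_score(expert_labels, model_labels)
print(f"Cohen's Kappa: {kappa:.2f}")
```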
Language: English
Cited: 1