
Bioengineering, Journal Year: 2025, Volume and Issue: 12(6), P. 558 - 558
Published: May 22, 2025
In recent years, deep learning has shown promise in automating heart-sound classification. Although this approach is fast, non-invasive, and cost-effective, its diagnostic accuracy still mainly depends on the clinician’s expertise, making it particularly challenging to detect rare or complex conditions. This study was motivated by two key concerns in the field. First, we observed that automatic segmentation algorithms, which are commonly used for data augmentation, produce varying outcomes, raising questions about both the segmentation process and the resulting classification performance. Second, we noticed inconsistent scores across different pretrained models, prompting the need for interpretable explanations to validate these results. We argue that, without interpretability to support the reported metrics, results can be misleading because of ambiguity in how the training data interact with the models. Specifically, it remains unclear whether models classify spectrogram images generated from heart-sound signals in a way that aligns with clinical reasoning, in which experts focus on specific components of the heart cycle, such as S1, systole, S2, and diastole. To address this, we applied explainable AI (XAI) techniques with two primary objectives: (1) to assess whether the model truly focuses on clinically relevant features, thereby allowing the results to be verified and trusted, and (2) to investigate whether incorporating attention mechanisms improves classification performance and the model’s focus on meaningful segments of the signal. To the best of our knowledge, this is the first study conducted on a manually segmented dataset that objectively evaluates model behavior using XAI and explores performance enhancement by incorporating attention mechanisms. We employ the Grad-CAM method to visualize and gain insights into the models’ decision-making process. The experimental results show that integrating multi-head attention significantly improves interpretability. Notably, ResNet50 with multi-head attention achieved an accuracy of 97.3%, outperforming both the baseline and SE-enhanced models. Moreover, the mean intersection over union (mIoU) increased from 75.7% to 82.0%, indicating improved focus on diagnostically relevant regions.
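To make the evaluation pipeline concrete, the following is a minimal sketch of how a Grad-CAM heatmap for a spectrogram classifier can be compared against a manual segmentation mask via IoU. It is not the authors' implementation: it assumes PyTorch and torchvision, a ResNet50 with `layer4` as the Grad-CAM target layer, two output classes, a 0.5 threshold on the heatmap, and illustrative tensor shapes and masks that stand in for real spectrograms and manually segmented S1/systole/S2/diastole regions.

```python
# Hedged sketch: Grad-CAM on a ResNet50 spectrogram classifier + IoU against a manual mask.
# Assumptions (not from the paper): torchvision ResNet50, layer4 target, 2 classes, 224x224 input.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(num_classes=2)  # spectrogram classifier; trained weights assumed
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["feat"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0].detach()

# Hook the last convolutional stage, a common Grad-CAM target for ResNet50.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(spectrogram, target_class):
    """Return a heatmap in [0, 1] upsampled to the input spectrogram size."""
    logits = model(spectrogram)                 # shape (1, num_classes)
    model.zero_grad()
    logits[0, target_class].backward()
    # Channel weights = spatial average of gradients; weighted sum of activations.
    weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=spectrogram.shape[-2:],
                        mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0]

def iou(cam, mask, threshold=0.5):
    """IoU between the thresholded Grad-CAM map and a binary manual-segmentation mask."""
    pred = cam > threshold
    gt = mask.bool()
    inter = (pred & gt).sum().float()
    union = (pred | gt).sum().float()
    return (inter / (union + 1e-8)).item()

# Illustrative usage: x is a dummy spectrogram, mask marks a clinically relevant band.
x = torch.randn(1, 3, 224, 224)
mask = torch.zeros(224, 224)
mask[:, 60:160] = 1
heatmap = grad_cam(x, target_class=1)
print("IoU for this sample:", iou(heatmap, mask))
```

Averaging this per-sample IoU over a test set gives an mIoU of the kind reported above; the attention-augmented variants would differ only in the backbone definition, not in this evaluation loop.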
Language: English