Interactive exploration of CNN interpretability via coalitional game theory
Lei Yang, Lijun Lu, Chao Liu

et al.

Scientific Reports, Year: 2025, Issue: 15(1)

Published: March 18, 2025

Convolutional neural networks (CNNs) have been widely used in image classification tasks. Neuron feature visualization techniques can generate intuitive images that depict the features extracted by individual neurons, helping users interpret the working mechanism of a CNN. However, a CNN model commonly contains numerous neurons, and manually reviewing all neurons' visualizations is exhausting, which makes interpretability exploration inefficient. Inspired by the SHapley Additive exPlanation (SHAP) method from coalitional game theory, a quantified metric called the Neuron Interpretive Metric (NeuronIM) is proposed to assess the expression ability of a neuron by calculating the similarity between its feature visualization and the SHAP values of that neuron, so that important neurons can be rapidly identified. A layer interpretive metric (LayerIM) and two interactive interfaces are then built on NeuronIM and LayerIM; the LayerIM of a convolution layer is obtained by averaging the NeuronIM values of the neurons in that layer. The interfaces display diverse explanatory information in multiple views and provide rich interactions so that users can efficiently accomplish interpretability exploration. A pruning experiment and use cases were conducted to demonstrate the effectiveness of the proposed metrics and interfaces.
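As a rough illustration of the metrics described in the abstract, the sketch below computes a NeuronIM-style score as the cosine similarity between a neuron's feature-visualization map and its SHAP attribution map, and a LayerIM-style score as the average over a layer's neurons. The function names, the choice of cosine similarity, and the array shapes are assumptions made for illustration, not the paper's exact formulation.

```python
import numpy as np

def neuron_im(feature_vis: np.ndarray, shap_map: np.ndarray) -> float:
    """NeuronIM-style score: similarity between a neuron's feature
    visualization and its SHAP attribution map.

    Cosine similarity is an assumed choice; the paper may use a
    different similarity measure.
    """
    v = feature_vis.ravel().astype(np.float64)
    s = shap_map.ravel().astype(np.float64)
    denom = np.linalg.norm(v) * np.linalg.norm(s)
    return float(v @ s / denom) if denom > 0 else 0.0

def layer_im(neuron_scores: list[float]) -> float:
    """LayerIM as the average NeuronIM over all neurons in a layer,
    following the description in the abstract."""
    return float(np.mean(neuron_scores)) if neuron_scores else 0.0

# Hypothetical usage: random maps stand in for the real feature
# visualizations and SHAP values of one convolution layer.
rng = np.random.default_rng(0)
scores = [neuron_im(rng.random((7, 7)), rng.random((7, 7))) for _ in range(8)]
print("LayerIM:", layer_im(scores))
```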

Language: English

Cited: 0