Multi-Conv attention network for skin lesion image segmentation
Zexin Li, Hanchen Wang, Haoyu Chen

et al.

Frontiers in Physics, Journal Year: 2024, Volume and Issue: 12

Published: Dec. 20, 2024

To address the trade-off between segmentation performance and model lightweighting in computer-aided skin lesion segmentation, this paper proposes a lightweight network architecture, the Multi-Conv Attention Network (MCAN). The network consists of two key modules: ISDConv (Inception-Split Depth Convolution) and AEAM (Adaptive Enhanced Attention Module). ISDConv reduces computational complexity by decomposing large-kernel depthwise convolutions into smaller unit mappings. The AEAM module leverages dimensional decoupling, multi-semantic guidance, and semantic discrepancy alleviation to facilitate the synergy between channel attention and spatial attention, further exploiting redundancy in feature maps. With these improvements, the proposed method achieves a balance between accuracy and efficiency. Experimental results demonstrate that MCAN achieves state-of-the-art performance on mainstream datasets, validating its effectiveness.
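The decomposition idea described for ISDConv can be illustrated with a small sketch. The block below is a hypothetical PyTorch illustration, assuming an Inception-style channel split in which a large square depthwise kernel is replaced by cheaper 1xk, kx1, small-kernel, and identity branches; the class name, branch choices, and kernel sizes are assumptions, not the authors' implementation.

```python
# Hypothetical sketch: splitting a large-kernel depthwise convolution into
# cheaper parallel branches, in the spirit of the ISDConv description.
# Names, branch choices, and kernel sizes are assumptions, not the paper's code.
import torch
import torch.nn as nn

class SplitDepthwiseConv(nn.Module):
    def __init__(self, channels: int, large_kernel: int = 11, small_kernel: int = 3):
        super().__init__()
        # Split the channels into four groups; each group gets a cheaper mapping.
        self.split = channels // 4
        pad_l, pad_s = large_kernel // 2, small_kernel // 2
        # 1 x k and k x 1 band convolutions approximate the k x k receptive field.
        self.dw_hor = nn.Conv2d(self.split, self.split, (1, large_kernel),
                                padding=(0, pad_l), groups=self.split)
        self.dw_ver = nn.Conv2d(self.split, self.split, (large_kernel, 1),
                                padding=(pad_l, 0), groups=self.split)
        # A small square depthwise convolution keeps local detail.
        self.dw_small = nn.Conv2d(self.split, self.split, small_kernel,
                                  padding=pad_s, groups=self.split)
        # The remaining channels pass through unchanged (identity branch).

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        c = self.split
        x1, x2, x3, x4 = x[:, :c], x[:, c:2*c], x[:, 2*c:3*c], x[:, 3*c:]
        return torch.cat([self.dw_hor(x1), self.dw_ver(x2),
                          self.dw_small(x3), x4], dim=1)

if __name__ == "__main__":
    block = SplitDepthwiseConv(channels=64)
    print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```

Only a quarter of the channels see each band convolution, so the parameter and FLOP cost grows roughly linearly in the kernel size rather than quadratically, which is the computational saving the abstract alludes to.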

Language: English

MugenNet: A Novel Combined Convolution Neural Network and Transformer Network with Application in Colonic Polyp Image Segmentation
Chen Peng, Zhiqin Qian, Kunyu Wang

et al.

Sensors, Journal Year: 2024, Volume and Issue: 24(23), P. 7473 - 7473

Published: Nov. 23, 2024

Accurate polyp image segmentation is of great significance because it can help in the detection of polyps. The convolutional neural network (CNN) is a common automatic segmentation method, but its main disadvantage is the long training time. The Transformer is another method that can be adapted to segmentation by employing a self-attention mechanism, which essentially assigns different importance weights to each piece of information, thus achieving high computational efficiency during segmentation. However, a potential drawback is the risk of information loss. The study reported in this paper employed the well-known hybridization principle to propose a way to combine CNN and Transformer and retain the strengths of both. Specifically, the approach was applied to the segmentation of early colonic polyps, implemented in a model called MugenNet. We conducted a comprehensive experiment comparing MugenNet with other models on five publicly available datasets; an ablation study was carried out as well. The experimental results showed that MugenNet can achieve a mean Dice of 0.714 on the ETIS dataset, the optimal performance on that dataset compared with the other models, with an inference speed of 56 FPS. The overall outcome is that MugenNet optimally combines the two machine learning methods, which are complementary to each other.
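The CNN-Transformer hybridization principle can be sketched as follows. This is a minimal, hypothetical PyTorch example, assuming a convolutional branch for local features, a self-attention branch over the flattened feature tokens for global context, and a simple fusion plus upsampling head; the names and sizes are illustrative, not the published MugenNet architecture.

```python
# Minimal, hypothetical sketch of a CNN + Transformer hybrid segmentation block:
# a convolutional branch keeps local detail, a self-attention branch captures
# global context, and the two are fused before the segmentation head.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridSegBlock(nn.Module):
    def __init__(self, in_ch: int = 3, width: int = 64, heads: int = 4):
        super().__init__()
        # CNN branch: two stride-2 convolutions give a 4x downsampled feature map.
        self.cnn = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Transformer branch: self-attention over the flattened CNN feature tokens.
        layer = nn.TransformerEncoderLayer(d_model=width, nhead=heads,
                                           dim_feedforward=2 * width,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        # Fusion + segmentation head: concatenate both branches and predict a mask.
        self.head = nn.Sequential(
            nn.Conv2d(2 * width, width, 1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 1, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.cnn(x)                        # (B, C, H/4, W/4) local features
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)  # (B, H*W/16, C) token sequence
        glob = self.transformer(tokens).transpose(1, 2).reshape(b, c, h, w)
        fused = torch.cat([feat, glob], dim=1)    # local + global features
        mask = self.head(fused)
        # Upsample the mask back to the input resolution.
        return F.interpolate(mask, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)

if __name__ == "__main__":
    model = HybridSegBlock()
    print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 1, 224, 224])
```

The design choice this sketch highlights is the complementarity the abstract describes: convolutions supply inductive bias for local boundaries, while self-attention relates distant regions of the lesion or polyp in a single step.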

Language: English

Citations: 5

Neural Memory Self-Supervised State Space Models With Learnable Gates
Zhihua Wang, Yuxin He, Yi Zhang

et al.

IEEE Signal Processing Letters, Journal Year: 2025, Volume and Issue: 32, P. 926 - 930

Published: Jan. 1, 2025

Language: English

Citations: 0

MSPMformer: The Fusion of Transformers and Multi-Scale Perception Modules for a Skin Lesion Segmentation Algorithm
Guoliang Yang, Zhen Geng, Qianchen Wang

et al.

IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 128602 - 128617

Published: Jan. 1, 2024

Language: English

Citations: 0
