PFFNet: A pyramid feature fusion network for microaneurysm segmentation in fundus images

Jiaxin Lu,

Beiji Zou,

Xiaoxia Xiao

et al.

IET Image Processing, Journal year: 2024, Issue: unknown

Published: Nov. 27, 2024

Abstract: Retinal microaneurysm (MA) is the definite earliest clinical sign of diabetic retinopathy (DR). Its automatic segmentation is key to realizing intelligent screening for early DR, which can significantly reduce the risk of visual impairment in patients. However, the minute scale and subtle contrast of MAs against the background pose challenges for segmentation. This paper focuses on MA segmentation in fundus images. A novel pyramid feature fusion network (PFFNet) that progressively develops and fuses rich contextual information by integrating two modules is proposed. Multiple global scene parsing (GPSP) modules are introduced between the encoder and decoder to provide diverse contextual information through reconstructed skip connections. Additionally, a spatial scale-aware (SSAP) module is designed to dynamically fuse multi-scale information, which helps identify MAs from the low-contrast background. Furthermore, to mitigate the issue of category imbalance, a combo loss function is introduced. Finally, to validate the effectiveness of the proposed method, experiments are conducted on two publicly available datasets, IDRiD and DDR, and PFFNet is compared with several state-of-the-art models. The experimental results demonstrate the superiority of our method on this task.
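The combo loss mentioned in the abstract is commonly a weighted sum of a class-weighted cross-entropy term and a Dice term, which counteracts the extreme foreground/background imbalance of tiny MA lesions. Below is a minimal PyTorch sketch of that common formulation; the weights alpha and beta and the name combo_loss are illustrative assumptions, not values taken from the paper.

```python
import torch

def combo_loss(logits, target, alpha=0.5, beta=0.5, eps=1e-6):
    """Combo loss for imbalanced binary segmentation (sketch).

    alpha balances cross-entropy vs. Dice; beta weights the positive
    (lesion) class inside the cross-entropy term. Both are illustrative
    defaults, not the paper's settings.
    """
    prob = torch.sigmoid(logits)
    # Class-weighted binary cross-entropy: beta > 0.5 would penalize
    # missed lesion pixels more heavily than false alarms.
    bce = -(beta * target * torch.log(prob + eps)
            + (1 - beta) * (1 - target) * torch.log(1 - prob + eps)).mean()
    # Soft Dice computed over the whole batch.
    inter = (prob * target).sum()
    dice = (2 * inter + eps) / (prob.sum() + target.sum() + eps)
    return alpha * bce + (1 - alpha) * (1 - dice)
```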

Language: English

An ensemble approach of deep CNN models with Beta normalization aggregation for gastrointestinal disease detection

Zafran Waheed,

Jinsong Gui,

Kamran Amjad

et al.

Biomedical Signal Processing and Control, Journal year: 2025, Issue: 105, P. 107567

Published: Feb. 4, 2025

Language: English

Cited by

1

Alternate encoder and dual decoder CNN-Transformer networks for medical image segmentation
Lin Zhang, Xinyu Guo,

Hongkun Sun

et al.

Scientific Reports, Journal year: 2025, Issue: 15(1)

Published: Mar. 14, 2025

Accurately extracting lesions from medical images is a fundamental but challenging problem in image analysis. In recent years, methods based on convolutional neural networks (CNNs) and Transformers have achieved great success in the segmentation field. Combining the powerful perception of local information by CNNs with the efficient capture of global context by Transformers is crucial for segmentation. However, the unique characteristics of many lesion tissues often lead to poor performance, as most previous models fail to fully extract effective features. Therefore, building on an encoder-decoder architecture, we propose a novel alternate encoder and dual decoder CNN-Transformer network, AD2Former, with two attractive designs: 1) an alternating learning encoder that achieves real-time interaction between local and global information, allowing both to mutually guide learning; 2) a dual decoder architecture, in which two sub-decoders decode independently and their features are then fused. To efficiently fuse the different features of the sub-decoders during decoding, we introduce a channel attention module to reduce redundant information. Driven by these designs, AD2Former demonstrates a strong ability to segment target regions with fuzzy boundaries. Experiments on multi-organ and skin lesion datasets also demonstrate the effectiveness and superiority of AD2Former.
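A standard way to realize the channel attention fusion described above is a squeeze-and-excitation style gate over the concatenated sub-decoder features, which suppresses redundant channels before projection. The PyTorch sketch below illustrates that idea under those assumptions; the class name ChannelAttentionFusion, the reduction ratio r, and the exact layer layout are not taken from the paper.

```python
import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    """Fuse two sub-decoder feature maps with SE-style channel attention
    (a generic stand-in for the paper's module, not its exact block)."""

    def __init__(self, channels: int, r: int = 8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # squeeze: global context per channel
            nn.Conv2d(2 * channels, 2 * channels // r, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * channels // r, 2 * channels, 1),
            nn.Sigmoid(),                                  # per-channel weight in (0, 1)
        )
        self.proj = nn.Conv2d(2 * channels, channels, 1)   # project back to C channels

    def forward(self, f_a: torch.Tensor, f_b: torch.Tensor) -> torch.Tensor:
        x = torch.cat([f_a, f_b], dim=1)   # (B, 2C, H, W)
        x = x * self.gate(x)               # downweight redundant channels
        return self.proj(x)
```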

Language: English

Cited by

0

Monocular depth estimation via a detail semantic collaborative network for indoor scenes
Wen Song, Xu Cui, Yakun Xie

et al.

Scientific Reports, Journal year: 2025, Issue: 15(1)

Published: Mar. 31, 2025

Monocular image depth estimation is crucial for indoor scene reconstruction, and it plays a significant role in optimizing building energy efficiency, environment modeling, and smart space design. However, the small variability of indoor scenes leads to weakly distinguishable detail features. Meanwhile, there are diverse types of objects, and expressing the correlation among different objects is complicated. Additionally, the robustness of recent models still needs further improvement in these environments. To address these problems, a detail-semantic collaborative network (DSCNet) is proposed for monocular depth estimation in indoor scenes. First, the contextual features contained in the images are fully captured via a hierarchical transformer structure. Second, a detail-semantic collaborative structure is established, which applies selective attention to the feature maps to extract detail and semantic information. The extracted features are subsequently fused to improve the perception ability of the network. Finally, the complexity of indoor scenes is addressed by aggregating detailed features at different levels, and model accuracy is effectively improved without increasing the number of parameters. The approach is tested on the NYU and SUN datasets and produces state-of-the-art results compared with 14 performance-optimal methods. In addition, the model is discussed and analyzed in terms of stability, robustness, ablation experiments, and availability.
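One plausible reading of the selective-attention fusion described above is a learned gate that mixes the detail and semantic feature maps per pixel, so fine structure dominates where it is informative and semantics dominate elsewhere. The PyTorch sketch below illustrates that reading; the SelectiveFusion name and gate design are assumptions, not DSCNet's published block.

```python
import torch
import torch.nn as nn

class SelectiveFusion(nn.Module):
    """Illustrative detail/semantic fusion via a learned spatial gate
    (a sketch of the selective-attention idea, not the paper's exact module)."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.Sigmoid(),  # per-pixel, per-channel selection weight in (0, 1)
        )

    def forward(self, detail: torch.Tensor, semantic: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([detail, semantic], dim=1))
        # Convex combination: g -> 1 favors fine detail, g -> 0 favors semantics.
        return g * detail + (1 - g) * semantic
```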

Language: English

Cited by

0

FastUGI-Net: Enhanced real-time endoscopic diagnosis with efficient multi-task learning
In Neng Chan, Pak Kin Wong, Tao Yan

et al.

Expert Systems with Applications, Journal year: 2025, Issue: unknown, P. 127444

Published: Apr. 1, 2025

Language: English

Cited by

0

A lighter hybrid feature fusion framework for polyp segmentation
Xueqiu He,

Luo Yonggang,

Min Liu

et al.

Scientific Reports, Journal year: 2024, Issue: 14(1)

Published: Oct. 5, 2024

Colonoscopy is widely recognized as the most effective method for the detection of colon polyps, which is crucial for the early screening of colorectal cancer. Polyp identification and segmentation in colonoscopy images require specialized medical knowledge and are often labor-intensive and expensive. Deep learning provides an intelligent and efficient approach to polyp segmentation. However, the variability in polyp size and the heterogeneity of polyp boundaries and interiors pose challenges for accurate segmentation. Currently, Transformer-based methods have become a mainstream trend, but these tend to overlook local details due to the inherent characteristics of the Transformer, leading to inferior results. Moreover, the computational burden brought by self-attention mechanisms hinders the practical application of these models. To address these issues, we propose a novel CNN-Transformer hybrid model (CTHP). CTHP combines the strengths of the CNN, which excels at modeling local information, and the Transformer, which captures global semantics, to enhance segmentation accuracy. We transform the attention computation over the entire feature map into computations along the width and height directions, significantly improving efficiency. Additionally, we design a new information propagation module and introduce additional positional bias coefficients during the attention process, which reduces the information dispersal introduced by deep mixed feature fusion in the Transformer. Extensive experimental results demonstrate that our proposed model achieves state-of-the-art performance on multiple benchmark datasets for polyp segmentation. Furthermore, cross-domain generalization experiments show that our model exhibits excellent performance.
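Factorizing attention along the width and height directions, as described above, is the axial-attention pattern: instead of one pass over all H*W positions (cost O((HW)^2)), each row and then each column is treated as an independent sequence (cost O(HW(H+W))). Below is a generic PyTorch sketch of that factorization; the AxialAttention class and head count are illustrative, not CTHP's exact layer.

```python
import torch
import torch.nn as nn

class AxialAttention(nn.Module):
    """Self-attention along width, then height, of a (B, C, H, W) map
    (a generic sketch of the factorization, not CTHP's exact design)."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Width pass: each of the B*H rows is an independent length-W sequence.
        rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)
        rows = rows + self.row_attn(rows, rows, rows, need_weights=False)[0]
        x = rows.reshape(b, h, w, c)
        # Height pass: each of the B*W columns is an independent length-H sequence.
        cols = x.permute(0, 2, 1, 3).reshape(b * w, h, c)
        cols = cols + self.col_attn(cols, cols, cols, need_weights=False)[0]
        return cols.reshape(b, w, h, c).permute(0, 3, 2, 1)  # back to (B, C, H, W)
```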

Language: English

Cited by

0

Automated lesion detection in gastrointestinal endoscopic images: leveraging deep belief networks and genetic algorithm-based segmentation
Mousa Alhajlah

Multimedia Tools and Applications, Journal year: 2024, Issue: unknown

Published: Nov. 23, 2024

Language: English

Cited by

0
