IEEE Transactions on Biomedical Engineering, Journal year: 2023, Issue: 70(9), pp. 2722-2732
Published: April 4, 2023
Language: English
2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Journal year: 2024, Issue: unknown
Published: Jan. 3, 2024
Efficient polyp segmentation in healthcare plays a critical role in enabling early diagnosis of colorectal cancer. However, the segmentation of polyps presents numerous challenges, including intricate background distributions, variations in polyp sizes and shapes, and indistinct boundaries. Defining the boundary between the foreground (i.e., the polyp itself) and the background (surrounding tissue) is difficult. To mitigate these challenges, we propose the Multi-Scale Edge-Guided Attention Network (MEGANet), tailored specifically for polyp segmentation within colonoscopy images. This network draws inspiration from the fusion of a classical edge detection technique with an attention mechanism. By combining these techniques, MEGANet effectively preserves high-frequency information, notably edges and boundaries, which tend to erode as neural networks deepen. MEGANet is designed as an end-to-end framework encompassing three key modules: an encoder, responsible for capturing and abstracting features from the input image; a decoder, which focuses on salient features; and an Edge-Guided Attention module (EGA) that employs the Laplacian operator to accentuate polyp boundaries. Extensive experiments, both qualitative and quantitative, on five benchmark datasets demonstrate that our model outperforms other existing SOTA methods under six evaluation metrics. Our code is available at https://github.com/UARK-AICV/MEGANet.
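To illustrate the edge-guided idea, the sketch below applies a fixed Laplacian kernel to decoder features and uses the resulting high-frequency map as an attention gate. This is a minimal PyTorch sketch of the general technique, not the authors' EGA implementation; the module name, channel layout, and residual gating are assumptions.

```python
# Minimal sketch of Laplacian-based edge-guided attention (hypothetical,
# in the spirit of MEGANet's EGA module; not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeGuidedAttention(nn.Module):
    """Re-weights decoder features with high-frequency (edge) cues
    extracted by a fixed, non-trainable Laplacian kernel."""
    def __init__(self, channels: int):
        super().__init__()
        lap = torch.tensor([[0., 1., 0.],
                            [1., -4., 1.],
                            [0., 1., 0.]])
        # One Laplacian filter per channel (depthwise).
        self.register_buffer("lap", lap.expand(channels, 1, 3, 3).clone())
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        edges = F.conv2d(feat, self.lap, padding=1, groups=feat.shape[1])
        attn = torch.sigmoid(self.fuse(edges.abs()))  # edge-aware gate in [0, 1]
        return feat + feat * attn                     # residual re-weighting

x = torch.randn(1, 64, 88, 88)
print(EdgeGuidedAttention(64)(x).shape)  # torch.Size([1, 64, 88, 88])
```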
Language: English
Cited by: 27
IEEE Transactions on Cybernetics, Journal year: 2024, Issue: 54(9), pp. 5040-5053
Published: March 18, 2024
Segmenting polyps from colonoscopy images is very important in clinical practice since it provides valuable information for the diagnosis of colorectal cancer. However, polyp segmentation remains a challenging task, as polyps have camouflage properties and vary greatly in size. Although many polyp segmentation methods have recently been proposed and have produced remarkable results, most of them cannot yield stable results because they lack features that both distinguish polyps from the background and retain high-level semantic details. Therefore, we propose a novel framework called the contrastive Transformer network (CTNet), with three key components: a contrastive Transformer backbone, a self-multiscale interaction module (SMIM), and a collection information module (CIM), which together give the model excellent learning and generalization abilities. The long-range dependence and highly structured feature map space obtained by CTNet through contrastive learning can effectively localize polyps with camouflage properties. CTNet benefits from the multiscale and high-resolution feature maps produced by SMIM and CIM, respectively, and can thus obtain accurate segmentation results for polyps of different sizes. Without bells and whistles, CTNet yields significant gains of 2.3%, 3.7%, 18.2%, and 10.1% over the classical method PraNet on Kvasir-SEG, CVC-ClinicDB, Endoscene, ETIS-LaribPolypDB, and CVC-ColonDB, respectively. In addition, CTNet has advantages in camouflaged object detection and defect detection. The code is available at https://github.com/Fhujinwu/CTNet.
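The abstract attributes CTNet's well-structured feature space to contrastive learning. Below is a generic InfoNCE-style contrastive loss, the standard formulation of that technique; it is illustrative only and does not reproduce CTNet's actual objective.

```python
# Generic InfoNCE contrastive loss sketch (illustrative; assumed setup,
# not CTNet's published objective).
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two views; row i of z1 and z2 are positives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # (N, N) scaled cosine similarities
    targets = torch.arange(z1.size(0))   # diagonal entries are the positives
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```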
Language: English
Cited by: 18
Lecture Notes in Computer Science, Journal year: 2022, Issue: unknown, pp. 99-109
Published: Jan. 1, 2022
Language: English
Cited by: 66
IEEE Transactions on Neural Networks and Learning Systems, Journal year: 2022, Issue: 35(4), pp. 5355-5366
Published: Sep. 19, 2022
The precise segmentation of medical images is one of the key challenges in pathology research and clinical practice. However, many medical image segmentation tasks face problems such as large differences between different types of lesions and similarity in shape and color between lesions and surrounding tissues, which seriously hinders improvement of segmentation accuracy. In this article, a novel method called the Swin Pyramid Aggregation network (SwinPA-Net) is proposed by combining two designed modules with the Swin Transformer to learn more powerful and robust features. The two modules, named the dense multiplicative connection (DMC) module and the local pyramid attention (LPA) module, are designed to aggregate multiscale context information of medical images. The DMC module cascades multiscale semantic feature information through dense multiplicative feature fusion, which minimizes the interference of shallow background noise to improve feature expression and addresses the problem of excessive variation in lesion size and type. Moreover, the LPA module guides the network to focus on the region of interest by merging global and local attention, which helps to solve such problems. The method is evaluated on public benchmark datasets for the polyp segmentation task and the skin lesion segmentation task, as well as a private dataset for a laparoscopic segmentation task. Compared with existing state-of-the-art (SOTA) methods, SwinPA-Net achieves the most advanced performance and outperforms the second-best method in mean Dice score by 1.68%, 0.8%, and 1.2% on the three tasks, respectively.
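To make the multiplicative-fusion idea concrete, here is a minimal sketch in the spirit of the DMC module: deeper, more semantic maps are upsampled and multiplied into shallower ones so that background activations are suppressed. The module name, scale count, and fusion order are assumptions, not the authors' code.

```python
# Hypothetical dense multiplicative fusion sketch (spirit of SwinPA-Net's
# DMC module; shapes and ordering are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseMultiplicativeFusion(nn.Module):
    """Upsamples deeper maps and multiplies them into shallower ones,
    so low responses at depth suppress shallow background noise."""
    def __init__(self, channels: int):
        super().__init__()
        self.convs = nn.ModuleList(nn.Conv2d(channels, channels, 3, padding=1)
                                   for _ in range(3))

    def forward(self, f_low, f_mid, f_high):
        # f_low: highest resolution; f_high: lowest resolution (most semantic).
        up = lambda t, ref: F.interpolate(t, size=ref.shape[-2:],
                                          mode="bilinear", align_corners=False)
        f_high = self.convs[0](f_high)
        f_mid = self.convs[1](f_mid) * up(f_high, f_mid)  # multiplicative gating
        f_low = self.convs[2](f_low) * up(f_mid, f_low)
        return f_low

out = DenseMultiplicativeFusion(32)(torch.randn(1, 32, 96, 96),
                                    torch.randn(1, 32, 48, 48),
                                    torch.randn(1, 32, 24, 24))
print(out.shape)  # torch.Size([1, 32, 96, 96])
```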
Language: English
Cited by: 57
IEEE Transactions on Medical Imaging, Journal year: 2023, Issue: 42(12), pp. 3987-4000
Published: Sep. 28, 2023
Polyps are very common abnormalities in human gastrointestinal regions. Their early diagnosis may help in reducing the risk of colorectal cancer. Vision-based computer-aided diagnostic systems automatically identify polyp regions to assist surgeons in their removal. Due to varying shape, color, size, texture, and unclear boundaries, polyp segmentation in images is a challenging problem. Existing deep learning models mostly rely on convolutional neural networks, which have certain limitations in capturing the diversity of visual patterns at different spatial locations. Further, they fail to capture inter-feature dependencies. Vision transformer models have also been deployed for polyp segmentation due to their powerful global feature extraction capabilities, but they too must be supplemented by convolution layers for contextual local information. In the present paper, a model called CoInNet is proposed with a novel mechanism that leverages the strengths of convolution and involution operations and learns to highlight polyp regions by considering the relationships between feature maps through a statistical attention unit. To further aid the network in learning polyp boundaries, an anomaly boundary approximation module is introduced that uses recursively fed feature fusion to refine segmentation results. It is indeed remarkable that even tiny polyps occupying only 0.01% of the image area can be precisely segmented by CoInNet. This is crucial for clinical applications, as small polyps are easily overlooked in manual examination of voluminous wireless capsule endoscopy videos. CoInNet outperforms thirteen state-of-the-art methods on five benchmark datasets.
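CoInNet pairs convolution with involution. For readers unfamiliar with the latter, the sketch below is a minimal involution layer in the style of Li et al. (CVPR 2021): kernel weights are generated per spatial location from the input itself and shared across channel groups. It illustrates the operation generically, not CoInNet's exact variant.

```python
# Minimal involution layer (Li et al., CVPR 2021); a generic sketch,
# not CoInNet's implementation.
import torch
import torch.nn as nn

class Involution(nn.Module):
    """Spatially varying, channel-shared kernels generated from the input
    (the structural inverse of convolution)."""
    def __init__(self, channels: int, k: int = 3, groups: int = 4, reduction: int = 4):
        super().__init__()
        self.k, self.groups = k, groups
        self.kernel_gen = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, groups * k * k, 1),
        )
        self.unfold = nn.Unfold(k, padding=k // 2)

    def forward(self, x):
        b, c, h, w = x.shape
        # One k*k kernel per group per pixel, generated from x itself.
        weight = self.kernel_gen(x).view(b, self.groups, 1, self.k * self.k, h, w)
        patches = self.unfold(x).view(b, self.groups, c // self.groups,
                                      self.k * self.k, h, w)
        return (weight * patches).sum(dim=3).view(b, c, h, w)

print(Involution(16)(torch.randn(1, 16, 32, 32)).shape)  # (1, 16, 32, 32)
```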
Language: English
Cited by: 35
IEEE Transactions on Circuits and Systems for Video Technology, Journal year: 2024, Issue: 34(8), pp. 7440-7453
Published: Feb. 26, 2024
Medical image segmentation is an essential process for assisting clinics with computer-aided diagnosis and treatment. Recently, a large number of convolutional neural network (CNN)-based methods have been rapidly developed and have achieved remarkable performance in several different medical segmentation tasks. However, the same type of infected region or lesion often appears at a diversity of scales, making accurate segmentation a challenging task. In this paper, we present a novel Uncertainty-aware Hierarchical Aggregation Network, namely UHA-Net, for medical image segmentation, which can make full use of cross-level and multi-scale features to handle scale variations. Specifically, we propose a hierarchical feature fusion (HFF) module to aggregate high-level features, which is used to produce a global map for coarse localization of the segmented target. Then, an uncertainty-induced cross-level fusion (UCF) module is proposed to fuse features from adjacent levels, which can learn knowledge guidance to capture contextual information at different resolutions. Further, a scale aggregation module (SAM) is presented that uses convolution kernels of different sizes to deal effectively with scale variations. At last, we formulate a unified framework to simultaneously learn inter-layer discriminability and intra-layer representations, leading to accurate segmentation results. We carry out experiments on three segmentation tasks, and the results demonstrate that our UHA-Net outperforms state-of-the-art methods. Our implementation code and segmentation maps will be publicly available at https://github.com/taozh2017/UHANet.
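The SAM described above relies on convolution kernels of different sizes. A minimal sketch of such a scale aggregation block follows; the parallel-branch-plus-residual design and the particular kernel sizes are assumptions for illustration, not the authors' code.

```python
# Hedged sketch of a scale aggregation block (spirit of UHA-Net's SAM):
# parallel convolutions with different kernel sizes, fused by summation.
import torch
import torch.nn as nn

class ScaleAggregation(nn.Module):
    def __init__(self, channels: int, kernel_sizes=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2) for k in kernel_sizes)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Each branch sees a different receptive field; summing them lets the
        # block respond to lesions at several scales at once.
        return self.relu(sum(branch(x) for branch in self.branches) + x)

print(ScaleAggregation(32)(torch.randn(1, 32, 64, 64)).shape)  # (1, 32, 64, 64)
```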
Language: English
Cited by: 12
Biomedical Signal Processing and Control, Journal year: 2024, Issue: 95, pp. 106336-106336
Published: April 21, 2024
Language: English
Cited by: 10
Nature Communications, Journal year: 2025, Issue: 16(1)
Published: April 8, 2025
The rapid adoption of Artificial Intelligence (AI) in medical imaging raises fairness and privacy concerns across demographic groups, especially in diagnosis and treatment decisions. While federated learning (FL) offers decentralized privacy preservation, current frameworks often prioritize collaboration over group fairness, risking healthcare disparities. Here we present FlexFair, an innovative FL framework designed to address both challenges. FlexFair incorporates a flexible regularization term to facilitate the integration of multiple fairness criteria, including equal accuracy, demographic parity, and equal opportunity. Evaluated on four clinical applications (polyp segmentation, fundus vascular segmentation, cervical cancer segmentation, and skin disease diagnosis), FlexFair outperforms state-of-the-art methods in both fairness and accuracy. Moreover, we curate a multi-center segmentation dataset that includes 678 patients from multiple hospitals. This diverse dataset allows a more comprehensive analysis of model performance across different population groups, ensuring the findings are applicable to a broader range of patients.
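To show what a flexible fairness regularizer can look like, the sketch below adds a penalty on the disparity of per-group average losses to the task loss, a rough proxy for the equal-accuracy criterion. The function, its weighting, and the disparity measure are hypothetical, not FlexFair's published formulation.

```python
# Illustrative group-fairness regularizer (hypothetical; not FlexFair's
# actual objective). Penalizes the spread of average losses across groups.
import torch

def fairness_regularized_loss(per_sample_loss: torch.Tensor,
                              group_ids: torch.Tensor,
                              lam: float = 0.5) -> torch.Tensor:
    """per_sample_loss: (N,) task losses; group_ids: (N,) demographic labels."""
    base = per_sample_loss.mean()
    gm = torch.stack([per_sample_loss[group_ids == g].mean()
                      for g in group_ids.unique()])
    disparity = ((gm - gm.mean()) ** 2).mean()  # spread of group-average losses
    return base + lam * disparity

loss = fairness_regularized_loss(torch.rand(16), torch.randint(0, 3, (16,)))
print(loss.item())
```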
Language: English
Cited by: 1
IEEE Transactions on Image Processing, Journal year: 2022, Issue: 31, pp. 6649-6663
Published: Jan. 1, 2022
Recent research advances in salient object detection (SOD) can largely be attributed to ever-stronger multi-scale feature representations empowered by deep learning technologies. Existing SOD models extract multi-scale features via off-the-shelf encoders and combine them smartly via various delicate decoders. However, the kernel sizes in this commonly used pipeline are usually "fixed". In our new experiments, we have observed that kernels of small size are preferable in scenarios containing tiny salient objects. In contrast, large kernel sizes perform better for images with large salient objects. Inspired by this observation, we advocate "dynamic" scale routing (as a brand-new idea) in this paper. It results in a generic plug-in that can directly fit any existing feature backbone. This paper's key technical innovations are two-fold. First, instead of using vanilla convolution with fixed kernel sizes in the encoder design, we propose dynamic pyramid convolution (DPConv), which dynamically selects the best-suited kernel sizes w.r.t. the given input. Second, we provide a self-adaptive bidirectional decoder design to best accommodate the DPConv-based encoder. The most significant highlight is its capability of routing between scales and their collection, making the inference process scale-aware. As a result, this paper continues to enhance the current SOTA performance. Both the code and the dataset are publicly available at https://github.com/wuzhenyubuaa/DPNet.
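A simplified take on dynamic scale routing is sketched below: parallel convolutions with different kernel sizes are mixed by input-dependent softmax gates, so each image effectively chooses its receptive field. DPConv's actual routing is more elaborate; the class name, branch count, and gating here are assumptions.

```python
# Simplified, hypothetical dynamic-kernel-selection sketch (inspired by
# DPConv; not the paper's implementation).
import torch
import torch.nn as nn

class DynamicPyramidConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes)
        self.gate = nn.Sequential(            # per-image routing weights
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, len(kernel_sizes), 1),
        )

    def forward(self, x):
        w = torch.softmax(self.gate(x), dim=1)   # (B, num_branches, 1, 1)
        outs = [branch(x) for branch in self.branches]
        # Weighted mixture of branch outputs: input decides which scale wins.
        return sum(w[:, i:i + 1] * o for i, o in enumerate(outs))

print(DynamicPyramidConv(16, 32)(torch.randn(2, 16, 64, 64)).shape)  # (2, 32, 64, 64)
```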
Language: English
Cited by: 33
Biomedical Signal Processing and Control, Journal year: 2023, Issue: 83, pp. 104593-104593
Published: Jan. 29, 2023
Language: English
Cited by: 23