IEEE Transactions on Biomedical Engineering, Journal year: 2023, Issue 70(9), pp. 2722 - 2732
Published: April 4, 2023
Language: English
arXiv (Cornell University), Journal year: 2021, Issue unknown
Published: Jan. 1, 2021
Most polyp segmentation methods use CNNs as their backbone, leading to two key issues when exchanging information between the encoder and decoder: 1) taking into account the differences in contribution between different-level features, and 2) designing an effective mechanism for fusing these features. Unlike existing CNN-based methods, we adopt a transformer encoder, which learns more powerful and robust representations. In addition, considering the image acquisition influence and elusive properties of polyps, we introduce three standard modules, including a cascaded fusion module (CFM), a camouflage identification module (CIM), and a similarity aggregation module (SAM). Among these, the CFM is used to collect the semantic and location information of polyps from high-level features; the CIM is applied to capture polyp information disguised in low-level features; and the SAM extends the pixel features of the polyp area with high-level semantic position information to the entire polyp area, thereby effectively fusing cross-level features. The proposed model, named Polyp-PVT, effectively suppresses noises in the features and significantly improves their expressive capabilities. Extensive experiments on five widely adopted datasets show that the proposed model is more robust to various challenging situations (e.g., appearance changes, small objects, rotation) than representative methods. The code is available at https://github.com/DengPingFan/Polyp-PVT.
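For readers who want a concrete picture of the coarse-to-fine fusion described above, the following is a minimal PyTorch sketch of cascaded multi-level feature fusion. The module name, channel widths, and structure are illustrative assumptions, not the actual Polyp-PVT CFM/CIM/SAM implementation (see the linked repository for that).

```python
# Illustrative sketch only: cascaded top-down fusion of multi-level encoder features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CascadedFusion(nn.Module):
    """Fuse pyramid features from deepest to shallowest, coarse-to-fine."""
    def __init__(self, channels=(512, 320, 128), out_ch=64):
        super().__init__()
        # 1x1 convs project every level to a common channel width (sizes are assumptions)
        self.proj = nn.ModuleList(nn.Conv2d(c, out_ch, 1) for c in channels)
        self.fuse = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        self.head = nn.Conv2d(out_ch, 1, 1)  # binary polyp mask logits

    def forward(self, feats):
        # feats: ordered from deepest (smallest) to shallowest (largest) feature map
        x = self.proj[0](feats[0])
        for proj, f in zip(self.proj[1:], feats[1:]):
            x = F.interpolate(x, size=f.shape[-2:], mode="bilinear", align_corners=False)
            x = self.fuse(x + proj(f))  # add-and-refine at each scale
        return self.head(x)

# toy usage with fake pyramid features
f3 = torch.randn(1, 512, 11, 11)
f2 = torch.randn(1, 320, 22, 22)
f1 = torch.randn(1, 128, 44, 44)
print(CascadedFusion()((f3, f2, f1)).shape)  # torch.Size([1, 1, 44, 44])
```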
Language: English
Cited: 120
2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Journal year: 2022, Issue unknown, pp. 1150 - 1156
Published: Dec. 6, 2022
Recently, some pioneering works have preferred applying more complex modules to improve segmentation performance. However, such models are not friendly for actual clinical environments due to limited computing resources. To address this challenge, we propose a light-weight model that achieves competitive skin lesion segmentation performance at the lowest cost of parameters and computational complexity so far. Briefly, we propose four modules: (1) DGA consists of dilated convolution and gated attention mechanisms to extract global and local feature information; (2) IEA, which is based on external attention, characterizes the overall datasets and enhances the connection between samples; (3) CAB is composed of 1D convolution and fully connected layers to fuse multi-stage features and generate attention maps along the channel axis; (4) SAB operates on multi-stage features with a shared 2D convolution to generate attention maps along the spatial axis. We combine these four modules with a U-shape architecture to obtain a light-weight medical image segmentation model dubbed MALUNet. Compared with UNet, our model improves the mIoU and DSC metrics by 2.39% and 1.49%, respectively, with a 44x and 166x reduction in the number of parameters and computational complexity. In addition, we conduct comparison experiments on two skin lesion segmentation datasets (ISIC2017 and ISIC2018). Experimental results show that our method achieves state-of-the-art performance in balancing the number of parameters, computational complexity, and segmentation performance. Code is available at https://github.com/JCruan519/MALUNet.
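As a rough illustration of the dilated-plus-gated attention idea behind the DGA description above, here is a small PyTorch sketch. The class name, the depth-wise dilated convolution, and the sigmoid gating are assumptions made for exposition, not the MALUNet reference code.

```python
# Hedged sketch: a light-weight dilated, gated attention block.
import torch
import torch.nn as nn

class DilatedGatedAttention(nn.Module):
    def __init__(self, ch, dilation=3):
        super().__init__()
        # depth-wise dilated conv gathers wider context cheaply (light-weight focus)
        self.context = nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation, groups=ch)
        self.gate = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.Sigmoid())

    def forward(self, x):
        # the gate decides, per position and channel, how much dilated context to add
        return x + self.context(x) * self.gate(x)

x = torch.randn(2, 32, 64, 64)
print(DilatedGatedAttention(32)(x).shape)  # torch.Size([2, 32, 64, 64])
```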
Language: English
Cited: 105
Lecture Notes in Computer Science, Journal year: 2023, Issue unknown, pp. 481 - 490
Published: Jan. 1, 2023
Language: English
Cited: 98
Deleted Journal, Journal year: 2022, Issue 19(6), pp. 531 - 549
Published: Nov. 3, 2022
We present the first comprehensive video polyp segmentation (VPS) study in the deep learning era. Over the years, developments in VPS have not been moving forward with ease due to the lack of a large-scale dataset with fine-grained annotations. To address this issue, we first introduce a high-quality, frame-by-frame annotated dataset, named SUN-SEG, which contains 158,690 colonoscopy frames from the well-known SUN-database. We provide additional annotations covering diverse types, i.e., attribute, object mask, boundary, scribble, and polygon. Second, we design a simple but efficient baseline, PNS+, which consists of a global encoder, a local encoder, and normalized self-attention (NS) blocks. The encoders receive an anchor frame and multiple successive frames to extract long-term and short-term spatial-temporal representations, which are then progressively refined by two NS blocks. Extensive experiments show that PNS+ achieves the best performance and real-time inference speed (170 fps), making it a promising solution for the VPS task. Third, we extensively evaluate 13 representative polyp/object segmentation models on our SUN-SEG dataset and provide attribute-based comparisons. Finally, we discuss several open issues and suggest possible research directions for the VPS community. Our project is publicly available at https://github.com/GewelsJI/VPS.
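To make the space-time attention idea more tangible, the sketch below flattens an anchor frame and its successive frames into a single token sequence and applies standard self-attention after a LayerNorm. This is an assumption-laden illustration of joint short-/long-term modeling, not the PNS+ normalized self-attention block itself.

```python
# Sketch under assumptions: self-attention over space-time tokens from a short clip.
import torch
import torch.nn as nn

class SpaceTimeSelfAttention(nn.Module):
    def __init__(self, ch, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)
        self.norm = nn.LayerNorm(ch)

    def forward(self, feats):
        # feats: (B, T, C, H, W) -> tokens (B, T*H*W, C)
        b, t, c, h, w = feats.shape
        tokens = feats.permute(0, 1, 3, 4, 2).reshape(b, t * h * w, c)
        normed = self.norm(tokens)
        out, _ = self.attn(normed, normed, normed)
        return (tokens + out).reshape(b, t, h, w, c).permute(0, 1, 4, 2, 3)

clip = torch.randn(1, 5, 64, 16, 16)  # anchor frame + 4 successive frames (toy sizes)
print(SpaceTimeSelfAttention(64)(clip).shape)  # torch.Size([1, 5, 64, 16, 16])
```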
Language: English
Cited: 91
Scientific Reports, Journal year: 2023, Issue 13(1)
Published: Jan. 21, 2023
Detection of colorectal polyps through colonoscopy is an essential practice in the prevention of colorectal cancers. However, the method itself is labor intensive and subject to human error. With the advent of deep learning-based methodologies, specifically convolutional neural networks, an opportunity has appeared to improve the prognosis of potential patients suffering from colorectal cancer through automated detection and segmentation of polyps. Polyp segmentation is subject to a number of problems such as model overfitting and generalization, poor definition of boundary pixels, as well as the model's ability to capture the practical range of textures, sizes, and colors. In an effort to address these challenges, we propose a dual encoder-decoder solution named Polyp Segmentation Network (PSNet). Both the dual encoder and decoder were developed by the comprehensive combination of a variety of deep learning modules, including the PS encoder, transformer encoder, PS decoder, enhanced dilated transformer decoder, partial decoder, and merge module. PSNet outperforms state-of-the-art results in an extensive comparative study against 5 existing polyp datasets with respect to both mDice and mIoU, at 0.863 and 0.797, respectively. On our new modified dataset we obtain 0.941 and 0.897.
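The sketch below illustrates, under assumptions, how a dual-branch design can merge two decoder outputs at the same resolution into one mask prediction. The MergeHeads name and channel widths are placeholders for exposition, not PSNet's actual merge module.

```python
# Hedged illustration: fuse two parallel branch features into a single mask head.
import torch
import torch.nn as nn

class MergeHeads(nn.Module):
    def __init__(self, ch_a, ch_b, out_ch=1):
        super().__init__()
        self.merge = nn.Sequential(
            nn.Conv2d(ch_a + ch_b, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, out_ch, 1),
        )

    def forward(self, a, b):
        # concatenate branch features at the same resolution, then predict one mask
        return self.merge(torch.cat([a, b], dim=1))

cnn_branch = torch.randn(1, 32, 88, 88)    # e.g., a CNN-style decoder output
xfmr_branch = torch.randn(1, 48, 88, 88)   # e.g., a transformer-style decoder output
print(MergeHeads(32, 48)(cnn_branch, xfmr_branch).shape)  # torch.Size([1, 1, 88, 88])
```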
Language: English
Cited: 64
Lecture Notes in Computer Science, Journal year: 2023, Issue unknown, pp. 343 - 356
Published: Dec. 24, 2023
Language: English
Cited: 64
IEEE Transactions on Medical Imaging, Journal year: 2023, Issue 42(6), pp. 1735 - 1745
Published: Jan. 13, 2023
Skin lesion segmentation from dermoscopy images is of great significance in the quantitative analysis of skin cancers, yet it is challenging even for dermatologists due to inherent issues, i.e., considerable size, shape and color variation, and ambiguous boundaries. Recent vision transformers have shown promising performance in handling the variation through global context modeling. Still, they have not thoroughly solved the problem of ambiguous boundaries, as they ignore the complementary usage of boundary knowledge and global contexts. In this paper, we propose a novel cross-scale boundary-aware transformer, XBound-Former, to simultaneously address the variation and boundary problems of skin lesion segmentation. XBound-Former is a purely attention-based network that catches boundary knowledge via three specially designed learners. First, an implicit boundary learner (im-Bound) constrains the attention on points with noticeable boundary variation, enhancing local boundary modeling while maintaining the global context. Second, an explicit boundary learner (ex-Bound) extracts boundary knowledge at multiple scales and converts it into embeddings explicitly. Third, based on the learned multi-scale boundary embeddings, a cross-scale boundary learner (X-Bound) uses the boundary embedding of one scale to guide the boundary-aware attention on the other scales. We evaluate the model on two skin lesion datasets and one polyp dataset, where our model consistently outperforms convolution- and transformer-based models, especially on boundary-wise metrics. All resources could be found.
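As an informal illustration of boundary-aware weighting (not the im-Bound/ex-Bound/X-Bound learners themselves), the sketch below derives a soft boundary prior from a predicted mask via a morphological gradient and uses it to emphasize features near ambiguous boundaries. The function names, kernel size, and weighting factor are assumptions.

```python
# Minimal sketch, assuming a soft mask prediction is already available.
import torch
import torch.nn.functional as F

def boundary_prior(mask_logits, k=5):
    """Soft boundary map: high where the predicted mask changes quickly."""
    p = torch.sigmoid(mask_logits)
    dilated = F.max_pool2d(p, k, stride=1, padding=k // 2)
    eroded = -F.max_pool2d(-p, k, stride=1, padding=k // 2)
    return dilated - eroded  # morphological gradient, values in [0, 1]

def boundary_weighted(features, mask_logits, alpha=1.0):
    # emphasize feature responses near the (soft) lesion boundary
    return features * (1.0 + alpha * boundary_prior(mask_logits))

feats = torch.randn(1, 64, 96, 96)
logits = torch.randn(1, 1, 96, 96)
print(boundary_weighted(feats, logits).shape)  # torch.Size([1, 64, 96, 96])
```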
Language: English
Cited: 61
IEEE Transactions on Instrumentation and Measurement, Journal year: 2023, Issue 72, pp. 1 - 13
Published: Jan. 1, 2023
Recently, deep convolutional neural networks (CNNs) have provided us an effective tool for automated polyp segmentation in colonoscopy images. However, most CNN-based methods do not fully consider the feature interaction among different layers and often cannot provide satisfactory segmentation performance. In this article, a novel attention-guided pyramid context network (APCNet) is proposed for accurate and robust polyp segmentation. Specifically, considering that different layers represent the polyp in different aspects, APCNet first extracts multilayer features in a pyramid structure, and then uses an attention-guided aggregation strategy to refine the context features of each layer using the complementary information of different layers. To obtain abundant context features, a context extraction module (CEM) explores the context of each layer via local information retainment and global information compaction. Through top-down deep supervision, our APCNet implements a coarse-to-fine segmentation and finally localizes the polyp region precisely. Extensive experiments on two in-domain and four out-of-domain settings show that APCNet is comparable to 19 state-of-the-art methods. Moreover, it holds a more appropriate tradeoff between effectiveness and computational complexity than these competing methods.
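Taking the abstract's wording literally, a context extraction module could pair a local branch that retains detail with a globally pooled branch that compacts context. The sketch below shows one such assumption-based arrangement in PyTorch; it is not APCNet's actual CEM.

```python
# Sketch based on the abstract's wording; parameter choices are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextExtraction(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.local = nn.Conv2d(ch, ch, 3, padding=1)            # local information retainment
        self.global_ = nn.Sequential(                            # global information compaction
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1), nn.ReLU(inplace=True)
        )

    def forward(self, x):
        g = self.global_(x)                                      # (B, C, 1, 1) global summary
        g = F.interpolate(g, size=x.shape[-2:], mode="nearest")  # broadcast back to the grid
        return self.local(x) + g

x = torch.randn(1, 128, 28, 28)
print(ContextExtraction(128)(x).shape)  # torch.Size([1, 128, 28, 28])
```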
Language: English
Cited: 45
Computers in Biology and Medicine, Journal year: 2023, Issue 159, pp. 106960 - 106960
Published: April 20, 2023
Language: English
Cited: 44
Lecture Notes in Computer Science, Journal year: 2024, Issue unknown, pp. 335 - 346
Published: Jan. 1, 2024
Language: English
Cited: 31