分割一切模型(SAM)在医学图像分割中的应用 (Application of the Segment Anything Model (SAM) in Medical Image Segmentation) DOI

吴曈 Wu Tong,

胡浩基 Hu Haoji,

冯洋 Feng Yang

et al.

Chinese Journal of Lasers, Journal Year: 2024, Volume and Issue: 51(21), P. 2107102 - 2107102

Published: Jan. 1, 2024

LeSAM: Adapt Segment Anything Model for Medical Lesion Segmentation DOI
Yunbo Gu, Qianyu Wu, Hui Tang

et al.

IEEE Journal of Biomedical and Health Informatics, Journal Year: 2024, Volume and Issue: 28(10), P. 6031 - 6041

Published: May 29, 2024

The Segment Anything Model (SAM) is a foundational model that has demonstrated impressive results in the field of natural image segmentation. However, its performance remains suboptimal for medical image segmentation, particularly when delineating lesions with irregular shapes and low contrast. This can be attributed to the significant domain gap between medical images and the natural images on which SAM was originally trained. In this paper, we propose an adaptation of SAM specifically tailored to lesion segmentation, termed LeSAM. LeSAM first learns medical-specific knowledge through an efficient adaptation module and integrates it with the general knowledge obtained from the pre-trained SAM. Subsequently, we leverage the merged knowledge to generate lesion masks using a modified mask decoder implemented as a lightweight U-shaped network design. This modification enables better delineation of lesion boundaries while facilitating ease of training. We conduct comprehensive experiments on various lesion segmentation tasks involving different modalities such as CT scans, MRI scans, ultrasound images, dermoscopic images, and endoscopic images. Our proposed method achieves superior performance compared with previous state-of-the-art methods on 8 out of 12 datasets, while achieving competitive performance on the remaining 4 datasets. Additionally, ablation studies are conducted to validate the effectiveness of our adaptation modules and modified decoder.
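The adapter idea the abstract describes can be sketched as a small trainable bottleneck branch merged residually with frozen pre-trained features. This is a minimal illustrative sketch, not the actual LeSAM architecture; the layer sizes, ReLU bottleneck, and residual-merge scheme are all assumptions.

```python
import numpy as np

# Hypothetical bottleneck adapter: features from the frozen pre-trained SAM
# encoder are augmented by a small, trainable medical-specific branch.
# Dimensions and the residual merge are illustrative assumptions only.

rng = np.random.default_rng(0)
d_model, d_bottleneck = 256, 32

W_down = rng.normal(0, 0.02, (d_model, d_bottleneck))  # trainable down-projection
W_up = rng.normal(0, 0.02, (d_bottleneck, d_model))    # trainable up-projection

def adapter(frozen_features: np.ndarray) -> np.ndarray:
    """Merge learned medical-specific knowledge into frozen features via a residual."""
    h = np.maximum(frozen_features @ W_down, 0.0)  # ReLU bottleneck
    return frozen_features + h @ W_up              # residual merge keeps general knowledge

tokens = rng.normal(size=(64, d_model))  # 64 image tokens from the frozen encoder
merged = adapter(tokens)
print(merged.shape)  # (64, 256)
```

Only `W_down` and `W_up` would be trained, which is what makes adapter-style tuning parameter-efficient relative to fine-tuning the whole encoder.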

Language: English

Citations

4

Knowledge-guided classification and regression surrogates co-assisted multi-objective soft subspace clustering algorithm DOI
Feng Zhao, Lu Li, Hanqiang Liu

et al.

Applied Intelligence, Journal Year: 2025, Volume and Issue: 55(6)

Published: Feb. 15, 2025

Language: English

Citations

0

The increasing role of artificial intelligence in radiation oncology: how should we navigate it? DOI Creative Commons
Florian Putz, Rainer Fietkau

Strahlentherapie und Onkologie, Journal Year: 2025, Volume and Issue: 201(3), P. 207 - 209

Published: Feb. 19, 2025

Language: English

Citations

0

Gender parity in radiation oncology in Germany: a 2024 analysis of professional roles and academic training DOI

Angela Besserer,

Tina Jost, Andrea Wittig

et al.

Strahlentherapie und Onkologie, Journal Year: 2025, Volume and Issue: unknown

Published: April 8, 2025

Language: English

Citations

0

An Experimental Survey of Incremental Transfer Learning for Multicenter Collaboration DOI Creative Commons
Yixing Huang, Christoph Bert, Ahmed M. Gomaa

et al.

IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 101210 - 101227

Published: Jan. 1, 2024

Due to data privacy constraints, data sharing among multiple clinical centers is restricted, which impedes the development of high-performance deep learning models from multicenter collaboration. Naive weight transfer methods share intermediate model weights without raw data and hence can bypass data privacy restrictions. However, performance drops are typically observed when a model is transferred from one center to the next because of the forgetting problem. Incremental transfer learning, which combines peer-to-peer federated learning and domain incremental learning, can overcome the data privacy issue and meanwhile preserve model performance by using continual learning techniques. In this work, a conventional domain/task incremental learning framework is adapted for incremental transfer learning. A survey on the efficacy of prevalent regularization-based continual learning methods for multicenter collaboration is performed. The influences of data heterogeneity, classifier head setting, network optimizer, model initialization, center order, and weight transfer type have been investigated thoroughly. Our framework is publicly accessible to the research community for further development.
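A core ingredient of the regularization-based continual learning methods the survey evaluates is a penalty that discourages weights from drifting away from the values learned at the previous center. A minimal EWC-style sketch, assuming a diagonal importance (Fisher) estimate; the variable names and toy values are illustrative, not taken from the paper:

```python
import numpy as np

# EWC-style quadratic penalty: while training at center B, keep each weight
# close to its value after center A, weighted by that weight's importance.
# The diagonal Fisher estimate and lambda value are illustrative assumptions.

def ewc_penalty(theta, theta_prev, fisher, lam=1.0):
    """0.5 * lam * sum_i F_i * (theta_i - theta_prev_i)^2"""
    return 0.5 * lam * np.sum(fisher * (theta - theta_prev) ** 2)

theta_prev = np.array([1.0, -0.5, 2.0])  # weights after training at center A
theta = np.array([1.2, -0.5, 1.5])       # weights while training at center B
fisher = np.array([10.0, 0.1, 4.0])      # per-parameter importance estimates

loss_reg = ewc_penalty(theta, theta_prev, fisher, lam=1.0)
print(loss_reg)  # ≈ 0.7
```

During training this penalty would simply be added to the task loss, so parameters the previous center deemed important (large Fisher value) resist change while unimportant ones stay free to adapt.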

Language: English

Citations

2

Repurposing traditional U-Net predictions for sparse SAM prompting in medical image segmentation DOI
Zachery Morton Colbert, Daniel Arrington, Matthew Foote

et al.

Biomedical Physics & Engineering Express, Journal Year: 2023, Volume and Issue: 10(2), P. 025004 - 025004

Published: Dec. 20, 2023

Objective: Automated medical image segmentation (MIS) using deep learning has traditionally relied on models built and trained from scratch, or at least fine-tuned on a target dataset. The Segment Anything Model (SAM) by Meta challenges this paradigm by providing zero-shot generalisation capabilities. This study aims to develop and compare methods for refining traditional U-Net segmentations by repurposing them for automated SAM prompting. Approach: A 2D U-Net with an EfficientNet-B4 encoder was trained with 4-fold cross-validation on an in-house brain metastases dataset. Segmentation predictions from each validation set were used for automatic sparse prompt generation via a bounding box prompting method (BBPM) and novel implementations of the point prompting method (PPM). The PPMs frequently produced poor slice predictions (PSPs) that required identification and substitution. A slice was identified as a PSP if it (1) contained multiple predicted regions per lesion or (2) possessed outlier foreground pixel counts relative to the patient's other slices. Each PSP was substituted with the corresponding initial BBPM prediction. The patients' mean volumetric dice similarity coefficient (DSC) was used to evaluate the methods' performances. Main results: Relative to the initial segmentations, the BBPM improved mean patient DSC by 3.93 ± 1.48% to 0.847 ± 0.008 DSC. PSPs constituted 20.01–21.63% of the PPMs' predictions, and without substitution performance dropped by 82.94 ± 3.17% to 0.139 ± 0.023 DSC. Pairing the two PSP identification techniques yielded a sensitivity of 92.95 ± 1.20%. By combining this approach with prediction substitution, the PPMs achieved accuracies on par with the BBPM, improving DSC by up to 4.17 ± 1.40% and reaching 0.849 ± 0.007 DSC. Significance: The proposed methods bridge the gap between PPM and BBPM performance for automated MIS. Additionally, the uniformity observed in our experiments' results demonstrates SAM's robustness to variations in prompting style. These findings can assist the design of both automatically and manually prompted MIS pipelines.
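The PSP heuristic in the abstract is concrete enough to sketch: flag a slice if it has multiple predicted regions per lesion, or if its foreground pixel count is an outlier among the patient's slices. A minimal sketch under stated assumptions; the z-score rule, its 1.5 threshold, and the input format are illustrative, and the paper's exact outlier criterion may differ:

```python
import numpy as np

# Hypothetical PSP (poor slice prediction) detector. A slice is flagged when
# (1) the prompt produced more than one predicted region for a lesion, or
# (2) its foreground pixel count is a z-score outlier vs. the other slices.
# The threshold of 1.5 standard deviations is an illustrative assumption.

def is_psp(region_count: int, pixel_count: int, all_pixel_counts: np.ndarray,
           z_thresh: float = 1.5) -> bool:
    mu, sigma = all_pixel_counts.mean(), all_pixel_counts.std()
    outlier = sigma > 0 and abs(pixel_count - mu) > z_thresh * sigma
    return region_count > 1 or outlier

counts = np.array([110, 120, 115, 118, 900])   # foreground pixels per slice
regions = [1, 1, 2, 1, 1]                      # predicted regions per slice
flags = [is_psp(rc, pc, counts) for rc, pc in zip(regions, counts)]
print(flags)  # [False, False, True, False, True]
```

Flagged slices would then be replaced by the corresponding BBPM prediction, which is what lets the point-prompting pipeline recover BBPM-level accuracy.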

Language: English

Citations

2

Cervical‐YOSA: Utilizing prompt engineering and pre‐trained large‐scale models for automated segmentation of multi‐sequence MRI images in cervical cancer DOI Creative Commons

Yanwei Xia,

Zhengjie Ou,

Lihua Tan

et al.

IET Image Processing, Journal Year: 2024, Volume and Issue: 18(12), P. 3556 - 3569

Published: Sept. 5, 2024

Abstract Cervical cancer is a major health concern, particularly in developing countries with limited medical resources. This study introduces two models aimed at improving cervical tumor segmentation: a semi‐automatic model that fine‐tunes the Segment Anything Model (SAM), and a fully automated model designed for efficiency. Evaluations were conducted using a dataset of 8586 magnetic resonance imaging (MRI) slices, where the semi‐automatic model achieved a Dice Similarity Coefficient (DSC) of 0.9097, demonstrating high accuracy. The fully automated model also performed robustly with a DSC of 0.8526, outperforming existing methods. These models offer significant potential to enhance cervical cancer diagnosis and treatment, especially in resource‐limited settings.
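Several of the abstracts above score segmentations with the Dice Similarity Coefficient, defined as 2|A∩B| / (|A| + |B|) for predicted mask A and ground-truth mask B. A minimal reference implementation on toy binary masks (the masks themselves are made up for illustration):

```python
import numpy as np

# Dice Similarity Coefficient: 2 * |intersection| / (|pred| + |truth|),
# ranging from 0 (no overlap) to 1 (perfect overlap).

def dsc(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks count as a perfect match.
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

pred = np.array([[0, 1, 1],
                 [0, 1, 0]])   # toy predicted mask
truth = np.array([[0, 1, 1],
                  [1, 1, 0]])  # toy ground-truth mask

print(round(dsc(pred, truth), 4))  # 0.8571  (2*3 / (3+4))
```

Volumetric DSC, as reported in the brain metastases study above, applies the same formula to the stacked 3D masks of a whole patient rather than to single slices.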

Language: English

Citations

0
