A Light-Weight Universal Medical Segmentation Network for Laptops Based on Knowledge Distillation

Songxiao Yang, Yizhou Li, Ye Chen et al.

Lecture Notes in Computer Science, Journal Year: 2025, Volume and Issue: unknown, P. 83 - 100

Published: Jan. 1, 2025

Language: English

Segment anything model for medical image analysis: An experimental study

Maciej A. Mazurowski, Haoyu Dong, Hanxue Gu et al.

Medical Image Analysis, Journal Year: 2023, Volume and Issue: 89, P. 102918 - 102918

Published: Aug. 3, 2023

Language: English

Citations: 328

A whole-body FDG-PET/CT Dataset with manually annotated Tumor Lesions

Sergios Gatidis, Tobias Hepp, Marcel Früh et al.

Scientific Data, Journal Year: 2022, Volume and Issue: 9(1)

Published: Oct. 4, 2022

We describe a publicly available dataset of annotated Positron Emission Tomography/Computed Tomography (PET/CT) studies. 1014 whole-body Fluorodeoxyglucose (FDG)-PET/CT datasets (501 studies of patients with malignant lymphoma, melanoma and non-small cell lung cancer (NSCLC) and 513 studies without PET-positive lesions (negative controls)) acquired between 2014 and 2018 were included. All examinations were performed on a single, state-of-the-art PET/CT scanner. The imaging protocol consisted of a whole-body FDG-PET acquisition and a corresponding diagnostic CT scan. FDG-avid lesions identified as malignant based on the clinical report were manually segmented on PET images in a slice-per-slice (3D) manner. We provide the anonymized original DICOM files of all studies as well as the corresponding segmentation masks. In addition, we provide scripts for image processing and conversion to different file formats (NIfTI, mha, hdf5). Primary diagnosis, age and sex are provided as non-imaging information. We demonstrate how this dataset can be used for deep learning-based automated analysis of PET/CT data and provide the data together with a trained deep learning model.

Language: English

Citations: 114
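The dataset described above ships voxel-wise 3D lesion masks alongside the PET/CT volumes. As a minimal illustration of how such a mask is typically consumed downstream (a sketch assuming a binary NumPy array and known voxel spacing; this is not the dataset's own conversion or analysis tooling), total lesion volume can be computed directly from the mask:

```python
import numpy as np

def lesion_volume_ml(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Total volume of a binary lesion mask in millilitres.

    mask: 3D array of 0/1 voxel labels (one value per voxel).
    spacing_mm: voxel spacing (z, y, x) in millimetres.
    """
    voxel_volume_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return float(mask.sum()) * voxel_volume_mm3 / 1000.0  # mm^3 -> mL

# Toy example: a 10x10x10 block of positive voxels at 2x2x2 mm spacing.
mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[10:20, 10:20, 10:20] = 1
print(lesion_volume_ml(mask, (2.0, 2.0, 2.0)))  # 1000 voxels * 8 mm^3 = 8.0 mL
```

The same quantity, summed over all lesions of a study, is the total metabolic tumor volume (TMTV) targeted by segmentation methods such as TMTV-Net below.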

On the challenges and perspectives of foundation models for medical image analysis

Shaoting Zhang, Dimitris Metaxas

Medical Image Analysis, Journal Year: 2023, Volume and Issue: 91, P. 102996 - 102996

Published: Oct. 12, 2023

Language: English

Citations: 72

Deep learning based synthesis of MRI, CT and PET: Review and analysis

Sanuwani Dayarathna, Kh Tohidul Islam, Sergio Uribe et al.

Medical Image Analysis, Journal Year: 2023, Volume and Issue: 92, P. 103046 - 103046

Published: Dec. 1, 2023

Medical image synthesis represents a critical area of research in clinical decision-making, aiming to overcome the challenges associated with acquiring multiple image modalities for an accurate clinical workflow. This approach proves beneficial in estimating an image of a desired modality from a given source modality among the most common medical imaging contrasts, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET). However, translating between two modalities presents difficulties due to the complex and non-linear domain mappings. Deep learning-based generative modelling has exhibited superior performance in synthetic image contrast applications compared with conventional methods. This survey comprehensively reviews deep learning-based medical image translation from 2018 to 2023 on pseudo-CT, synthetic MR, and synthetic PET. We provide an overview of the synthetic contrasts and of the deep learning networks most frequently employed for medical image synthesis. Additionally, we conduct a detailed analysis of each method, focusing on their diverse model designs based on input domains and network architectures. We also analyse novel network architectures, ranging from conventional CNNs to the recent Transformer and Diffusion models. This analysis includes comparing loss functions, available datasets and anatomical regions, and image quality assessments and performance on other downstream tasks. Finally, we discuss the challenges, identify solutions within the literature, and suggest possible future directions. We hope that the insights offered in this paper will serve as a valuable roadmap for researchers in the field.

Language: English

Citations: 55

TMTV-Net: fully automated total metabolic tumor volume segmentation in lymphoma PET/CT images — a multi-center generalizability analysis

Fereshteh Yousefirizi, Ivan S. Klyuzhin, Joo Hyun O et al.

European Journal of Nuclear Medicine and Molecular Imaging, Journal Year: 2024, Volume and Issue: 51(7), P. 1937 - 1954

Published: Feb. 8, 2024

Language: English

Citations: 21

Overview of the HECKTOR Challenge at MICCAI 2022: Automatic Head and Neck Tumor Segmentation and Outcome Prediction in PET/CT

Vincent Andrearczyk, Valentin Oreiller, Moamen Abobakr et al.

Lecture Notes in Computer Science, Journal Year: 2023, Volume and Issue: unknown, P. 1 - 30

Published: Jan. 1, 2023

Language: English

Citations: 41

The autoPET challenge: Towards fully automated lesion segmentation in oncologic PET/CT imaging

Sergios Gatidis, Marcel Früh, Matthias P. Fabritius et al.

Research Square (Research Square), Journal Year: 2023, Volume and Issue: unknown

Published: June 14, 2023

Abstract: We describe the results of the autoPET challenge, a biomedical image analysis challenge aimed to motivate and focus research in the field of automated whole-body PET/CT analysis. The challenge task was the segmentation of metabolically active tumor lesions on FDG-PET/CT. Challenge participants had access to one of the largest publicly available annotated PET/CT data sets for algorithm training. Over 350 teams from all continents registered for the challenge; the seven best-performing contributions were awarded at the MICCAI annual meeting 2022. Based on the challenge results we conclude that automated tumor lesion segmentation is feasible with high accuracy using state-of-the-art deep learning methods. We observed that performance on this task may primarily rely on the quality and quantity of the input data and less on technical details of the underlying architecture. Future challenge iterations will focus on clinical translation.

Language: English

Citations: 31

Deep Semisupervised Transfer Learning for Fully Automated Whole-Body Tumor Quantification and Prognosis of Cancer on PET/CT

Kevin Leung, Steven P. Rowe, Moe S. Sadaghiani et al.

Journal of Nuclear Medicine, Journal Year: 2024, Volume and Issue: 65(4), P. 643 - 650

Published: Feb. 29, 2024

Automatic detection and characterization of cancer are important clinical needs to optimize early treatment. We developed a deep, semisupervised transfer learning approach for fully automated, whole-body tumor segmentation and prognosis on PET/CT. Methods: This retrospective study consisted of 611 18F-FDG PET/CT scans of patients with lung cancer, melanoma, lymphoma, head and neck cancer, and breast cancer, and 408 prostate-specific membrane antigen (PSMA) PET/CT scans of patients with prostate cancer. The approach had an nnU-net backbone and learned the segmentation task on the PSMA images using limited annotations and radiomics analysis. True-positive rate and Dice similarity coefficient were assessed to evaluate segmentation performance. Prognostic models based on imaging measures extracted from the predicted segmentations were developed to perform risk stratification based on follow-up prostate-specific antigen levels, survival estimation by the Kaplan–Meier method and Cox regression analysis, and prediction of pathologic complete response after neoadjuvant chemotherapy. Overall accuracy and area under the receiver-operating-characteristic (AUC) curve were assessed. Results: Our approach yielded median true-positive rates of 0.75, 0.85, 0.87, and 0.75 and Dice similarity coefficients of 0.81, 0.76, 0.83, and 0.73, respectively, on the segmentation task. The risk-stratification model achieved an overall accuracy of 0.83 and an AUC of 0.86. Patients classified as low, intermediate, and high risk had mean follow-up prostate-specific antigen levels ranging from 18.61 ng/mL to 727.46 ng/mL (P < 0.05). The risk score was significantly associated with survival in both univariable and multivariable analyses. Predictive models using only pretherapy imaging measures and using both pre- and posttherapy measures yielded accuracies of 0.72 and 0.84, respectively, with corresponding AUCs. Conclusion: The proposed approach demonstrated accurate tumor segmentation and prognosis on PET/CT scans across 6 cancer types.

Language: English

Citations: 16
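The Dice similarity coefficient used as the segmentation metric in the study above has a compact definition: twice the overlap of prediction and reference divided by their combined size. A minimal sketch, assuming binary NumPy masks (this is not the authors' evaluation code):

```python
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks.

    Returns a value in [0, 1]; by convention two empty masks score 1.0.
    """
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]])
ref  = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(pred, ref))  # 2*2 / (3+3) = 0.666...
```

The true-positive rate reported alongside it is computed analogously, but normalized by the reference mask alone rather than by both masks.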

Accurate Fine-Grained Segmentation of Human Anatomy in Radiographs via Volumetric Pseudo-Labeling

Constantin Seibold, Alexander Jaus, Matthias A. Fink et al.

Research Square (Research Square), Journal Year: 2024, Volume and Issue: unknown

Published: Jan. 5, 2024

Abstract: Purpose: Interpreting chest radiographs (CXR) remains challenging due to the ambiguity of overlapping structures such as the lungs, heart, and bones. To address this issue, we propose a novel method for extracting fine-grained anatomical structures in CXR using pseudo-labeling of three-dimensional computed tomography (CT) scans. Methods: We created a large-scale dataset of 10,021 thoracic CTs with 157 labels and applied an ensemble of 3D anatomy segmentation models to extract anatomical pseudo-labels. These labels were projected onto a two-dimensional plane, similar to a CXR, allowing the training of detailed semantic segmentation models without any manual annotation effort. Results: Our resulting models demonstrated remarkable performance, with a high average model-annotator agreement between two radiologists at mIoU scores of 0.93 and 0.85 for frontal and lateral anatomy, while inter-annotator agreement remained at 0.95 and 0.83 mIoU. The anatomical segmentations allowed accurate extraction of relevant and explainable medical features such as the cardio-thoracic ratio. Conclusion: Volumetric pseudo-labeling paired with CT projection offers a promising approach to detailed anatomical segmentation of CXR without human annotators. This technique may have important clinical implications, particularly for the analysis of various pathologies.

Language: English

Citations: 10
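The core idea summarized above, projecting 3D CT pseudo-labels onto a 2D radiograph-like plane, can be sketched as follows. This is a toy NumPy version assuming an integer label volume; the published pipeline additionally models realistic radiographic projection rather than a simple binary collapse:

```python
import numpy as np

def project_pseudo_labels(labels3d: np.ndarray, axis: int = 1) -> dict:
    """Collapse a 3D label volume onto a 2D plane along one axis,
    the way a radiograph superimposes structures along the beam.

    Returns one binary 2D mask per label (background 0 excluded):
    a pixel is set if any voxel along the projection ray has that label.
    """
    masks = {}
    for label in np.unique(labels3d):
        if label == 0:
            continue  # skip background
        masks[int(label)] = (labels3d == label).any(axis=axis)
    return masks

# Toy volume: two structures at different depths (y) share the same
# frontal footprint, so their 2D projections overlap, as in a CXR.
vol = np.zeros((4, 4, 4), dtype=np.uint8)  # (z, y, x)
vol[1:3, 0, 1:3] = 1  # hypothetical structure 1, near the detector
vol[1:3, 3, 1:3] = 2  # hypothetical structure 2, near the source
masks = project_pseudo_labels(vol, axis=1)
print(masks[1].astype(int))  # 4x4 map with a 2x2 block of ones
```

Training a 2D segmentation model on such projected masks is what removes the need for manual CXR annotation in the method above.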

MedLSAM: Localize and segment anything model for 3D CT images

Wenhui Lei, Wei Xu, Kang Li et al.

Medical Image Analysis, Journal Year: 2024, Volume and Issue: 99, P. 103370 - 103370

Published: Oct. 15, 2024

Language: English

Citations: 9