Head and Neck Tumor Segmentation of MRI from Pre- and Mid-Radiotherapy with Pre-Training, Data Augmentation and Dual Flow UNet

Litingyu Wang, Wenjun Liao, Shichuan Zhang et al.

Lecture Notes in Computer Science, 2025, No. unknown, pp. 75-86

Published: Jan. 1, 2025

Language: English

Joint EANM/SNMMI guideline on radiomics in nuclear medicine
Mathieu Hatt, Aron K. Krizsan, Arman Rahmim et al.

European Journal of Nuclear Medicine and Molecular Imaging, 2022, Vol. 50(2), pp. 352-375

Published: Nov. 3, 2022

The purpose of this guideline is to provide comprehensive information on best practices for robust radiomics analyses, covering both hand-crafted and deep learning-based approaches.

Language: English

Cited by: 83

Overview of the HECKTOR Challenge at MICCAI 2022: Automatic Head and Neck Tumor Segmentation and Outcome Prediction in PET/CT
Vincent Andrearczyk, Valentin Oreiller, Moamen Abobakr et al.

Lecture Notes in Computer Science, 2023, No. unknown, pp. 1-30

Published: Jan. 1, 2023

Language: English

Cited by: 42

Auto-segmentation of head and neck tumors in positron emission tomography images using non-local means and morphological frameworks

Sahel Heydarheydari, Mohammad Javad Tahmasebi Birgani, Seyed Masoud Rezaeijo et al.

Polish Journal of Radiology, 2023, Vol. 88, pp. 365-370

Published: Aug. 14, 2023

Accurately segmenting head and neck cancer (HNC) tumors in medical images is crucial for effective treatment planning. However, current methods for HNC segmentation are limited in their accuracy and efficiency. The present study aimed to design a segmentation model for three-dimensional (3D) positron emission tomography (PET) images using Non-Local Means (NLM) and morphological operations. The proposed model was tested on data from the HECKTOR challenge public dataset, which included 408 patients with tumors. NLM was utilized for image noise reduction and preservation of critical information. Following pre-processing, morphological operations were used to assess the similarity of intensity and edge information within the images. The Dice score and Intersection over Union (IoU) were used to evaluate agreement between manual and predicted results. The model achieved an average Dice score of 81.47 ± 3.15, an IoU of 80 ± 4.5, and 94.03 ± 4.44 on a further reported metric, demonstrating its effectiveness on PET images. The algorithm provides the capability to produce patient-specific tumor segmentations without user interaction, addressing the limitations of manual segmentation. It has the potential to improve treatment planning and aid the development of personalized medicine. Additionally, this approach can be extended to effectively segment other annotated organs.
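The Dice score and IoU reported in this abstract are the standard overlap metrics for comparing a predicted mask against a manual delineation. A minimal sketch of both, using NumPy and tiny 2D masks as stand-ins for 3D PET volumes:

```python
import numpy as np

def dice_score(pred, gt):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou_score(pred, gt):
    """Intersection over Union (Jaccard index): |A ∩ B| / |A ∪ B|."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

# Toy masks: 4 predicted voxels, 6 ground-truth voxels, 4 overlapping
pred = np.zeros((4, 4), dtype=np.uint8)
gt = np.zeros((4, 4), dtype=np.uint8)
pred[1:3, 1:3] = 1
gt[1:3, 1:4] = 1
print(dice_score(pred, gt))  # 2*4/(4+6) = 0.8
print(iou_score(pred, gt))   # 4/6 ≈ 0.667
```

The same formulas apply unchanged to 3D arrays, since the sums run over all voxels regardless of dimensionality.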

Language: English

Cited by: 32

Gradient Map-Assisted Head and Neck Tumor Segmentation: A Pre-RT to Mid-RT Approach in MRI-Guided Radiotherapy
Jintao Ren, Kim Hochreuter, Mathis Rasmussen et al.

Lecture Notes in Computer Science, 2025, No. unknown, pp. 36-49

Published: Jan. 1, 2025

Language: English

Cited by: 1

Deep learning aided oropharyngeal cancer segmentation with adaptive thresholding for predicted tumor probability in FDG PET and CT images
Alessia de Biase, Nanna M. Sijtsema, Lisanne V. van Dijk et al.

Physics in Medicine and Biology, 2023, Vol. 68(5), Art. 055013

Published: Feb. 7, 2023

Tumor segmentation is a fundamental step for radiotherapy treatment planning. To define an accurate segmentation of the primary tumor (GTVp) in oropharyngeal cancer (OPC) patients, simultaneous assessment of different image modalities is needed, and each image volume is explored slice-by-slice from different orientations. Moreover, a manual fixed boundary neglects the spatial uncertainty known to occur in tumor delineation. This study proposes a novel automatic deep learning (DL) model to assist radiation oncologists in adaptive GTVp segmentation on registered FDG PET/CT images. We included 138 OPC patients treated with (chemo)radiation at our institute. Our DL framework exploits both inter- and intra-slice context. Sequences of 3 consecutive 2D slices of concatenated FDG PET/CT images and GTVp contours were used as input. A 3-fold cross validation was performed three times, training on sequences extracted in the Axial (A), Sagittal (S), and Coronal (C) plane of 113 patients. Since consecutive sequences contain overlapping slices, each slice resulted in three outcome predictions that were averaged. In the A, S, and C planes, the output shows areas with different probabilities of predicting the tumor. The performance of the models was assessed on 25 patients at different probability thresholds using the mean Dice Score Coefficient (DSC). Predictions were closest to the ground truth at a threshold of 0.9 (DSC of 0.70 in the A, 0.77 in the S, and 0.80 in the C plane). The promising results of the proposed model show that the predicted probability maps could guide radiation oncologists in adaptive tumor segmentation.
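The averaging-then-thresholding step this abstract describes, where each slice receives multiple overlapping predictions that are averaged before a probability cutoff is applied, can be sketched as follows. The array values are hypothetical, chosen only to illustrate the 0.9 threshold:

```python
import numpy as np

def threshold_prob_map(prob_maps, threshold=0.9):
    """Average overlapping per-slice probability predictions for the same
    slice, then binarize at the given probability threshold."""
    mean_prob = np.mean(prob_maps, axis=0)      # voxel-wise mean over predictions
    return (mean_prob >= threshold).astype(np.uint8)

# Three overlapping predictions for one 2x2 slice (hypothetical probabilities)
maps = np.array([
    [[0.95, 0.40], [0.92, 0.10]],
    [[0.90, 0.50], [0.88, 0.05]],
    [[0.97, 0.45], [0.93, 0.20]],
])
seg = threshold_prob_map(maps, threshold=0.9)
print(seg)  # only voxels whose mean probability >= 0.9 survive: [[1 0] [1 0]]
```

Lowering the threshold grows the binary contour outward along the probability gradient, which is what lets the map express delineation uncertainty rather than a single fixed boundary.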

Language: English

Cited by: 22

Head and neck cancer treatment outcome prediction: a comparison between machine learning with conventional radiomics features and deep learning radiomics
Bao Ngoc Huynh, Aurora Rosvoll Groendahl, Oliver Tomić et al.

Frontiers in Medicine, 2023, Vol. 10

Published: Aug. 30, 2023

Background: Radiomics can provide in-depth characterization of cancers for treatment outcome prediction. Conventional radiomics rely on extraction of image features within a pre-defined region of interest (ROI), which are typically fed to a classification algorithm for prediction of a clinical endpoint. Deep learning radiomics allows a simpler workflow where images can be used directly as input to a convolutional neural network (CNN) with or without a pre-defined ROI. Purpose: The purpose of this study was to evaluate (i) conventional radiomics and (ii) deep learning radiomics for predicting overall survival (OS) and disease-free survival (DFS) in patients with head and neck squamous cell carcinoma (HNSCC), using pre-treatment 18F-fluorodeoxyglucose positron emission tomography (FDG PET) and computed tomography (CT) images. Materials and methods: FDG PET/CT images and clinical data of HNSCC patients treated with radio(chemo)therapy at Oslo University Hospital (OUS; n = 139) and Maastricht University Medical Center (MAASTRO; n = 99) were collected retrospectively. OUS data was used for model training and initial evaluation. MAASTRO data was used for external testing to assess cross-institutional generalizability. Models trained on clinical and/or conventional radiomics features, with or without feature selection, were compared to CNNs trained on PET/CT images with or without the gross tumor volume (GTV) contours included. Model performance was measured using accuracy, area under the receiver operating characteristic curve (AUC), Matthew's correlation coefficient (MCC), and the F1 score calculated for both classes separately. Results: CNNs trained on PET/CT images achieved the highest performance for both endpoints. Adding clinical data to these image-based models increased performance further. Models including conventional radiomics features could achieve competitive performance, although feature selection could lead to overfitting and poor generalizability. CNNs trained without tumor and node contours performed close to on-par with CNNs trained with contours. Conclusion: High performance and cross-institutional generalizability can be achieved by combining clinical data and medical images together in CNN-based models. We see potential for their use as an automated screening tool for high-risk patients.
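The conventional-radiomics branch of the comparison above follows a standard pattern: a per-patient feature matrix, a feature-selection step, a classifier, and MCC-based evaluation. A schematic sketch with scikit-learn; the feature matrix here is random synthetic data standing in for real PET/CT radiomics features (only the cohort size of 139 mirrors the abstract), so the numbers are illustrative, not the study's results:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-in for a radiomics table: 139 patients x 100 features,
# with outcome driven by the first two features plus noise
X = rng.normal(size=(139, 100))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=139) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Standardize -> univariate feature selection -> classifier
model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=10),
                      LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
mcc = matthews_corrcoef(y_te, model.predict(X_te))
print(mcc)
```

With 100 mostly-irrelevant features and only ~100 training patients, this setup also makes the abstract's overfitting caveat concrete: selecting features on the full dataset before splitting, instead of inside the pipeline, would leak test information into the selection step.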

Language: English

Cited by: 19

Comparison of deep learning networks for fully automated head and neck tumor delineation on multi-centric PET/CT images
Yiling Wang, Elia Lombardo, Lili Huang et al.

Radiation Oncology, 2024, Vol. 19(1)

Published: Jan. 8, 2024

Abstract. Objectives: Deep learning-based auto-segmentation of head and neck cancer (HNC) tumors is expected to have better reproducibility than manual delineation. Positron emission tomography (PET) and computed tomography (CT) are commonly used in tumor segmentation. However, current methods still face challenges in handling whole-body scans, where selection of a bounding box may be required. Moreover, different institutions might apply different guidelines for delineation. This study aimed at exploring the auto-localization and segmentation of HNC tumors from entire PET/CT scans and investigating the transferability of trained baseline models to external real world cohorts. Methods: We employed a 2D Retina Unet to find the tumor region and utilized a regular Unet to segment the union of the tumor and involved lymph nodes. In comparison, 2D/3D Unets were also implemented to localize and segment the same target in an end-to-end manner. The performance was evaluated via the Dice similarity coefficient (DSC) and the Hausdorff distance 95th percentile (HD95). Delineated PET/CT scans from the HECKTOR challenge were used to train the baseline models by 5-fold cross-validation. Another 271 delineated PET/CTs from three external institutions (MAASTRO, CRO, BERLIN) were used for testing. Finally, facility-specific transfer learning was applied to investigate the improvement against the baseline models. Results: Encouraging localization results were observed, achieving a maximum omnidirectional tumor center difference lower than 6.8 cm. The baseline models yielded similar averaged cross-validation (CV) results with a DSC in the range of 0.71-0.75, while the CV HD95 was 8.6, 10.7 and 9.8 mm for the Retina Unet, 2D Unet, and 3D Unet, respectively. More than a 10% drop in DSC and a 40% increase in HD95 were observed if the models were tested on the external cohorts directly. After the transfer learning, testing on all cohorts showed the best DSC of 0.70 on the MAASTRO cohort and 0.76 and 0.67 on the CRO and BERLIN cohorts, with the best HD95 values of 7.8 and 7.9 mm on the MAASTRO and CRO cohorts and 12.4 mm on the BERLIN cohort. Conclusion: The Retina Unet outperformed the other two models on most external cohorts. Facility-specific transfer learning can potentially improve segmentation performance for individual institutions, achieving comparable or even better results than the baseline models.
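HD95, the second metric in this abstract, is the 95th percentile of the symmetric surface distances between two segmentations; clipping at the 95th percentile makes it robust to a few outlier voxels, unlike the plain Hausdorff maximum. A minimal brute-force sketch, assuming the segmentation surfaces are given as point sets of voxel coordinates (real evaluation toolkits typically use distance transforms for speed):

```python
import numpy as np
from scipy.spatial.distance import cdist

def hd95(surface_a, surface_b):
    """95th-percentile symmetric Hausdorff distance between two point sets
    (e.g. surface voxel coordinates of predicted and reference masks)."""
    d = cdist(surface_a, surface_b)   # all pairwise Euclidean distances
    a_to_b = d.min(axis=1)            # each point in A to its nearest in B
    b_to_a = d.min(axis=0)            # each point in B to its nearest in A
    return float(np.percentile(np.hstack([a_to_b, b_to_a]), 95))

# Two toy contours offset by 1 mm along x: every surface distance is 1 mm
a = np.array([[0.0, 0.0], [0.0, 1.0], [0.0, 2.0]])
b = a + np.array([1.0, 0.0])
print(hd95(a, b))  # 1.0
```

When coordinates are voxel indices rather than millimeters, the points should be scaled by the voxel spacing first so that HD95 comes out in physical units, as in the millimeter values reported above.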

Language: English

Cited by: 9

Extracting value from total-body PET/CT image data - the emerging role of artificial intelligence
Lalith Kumar Shiyam Sundar, Sebastian Gutschmayer, Marcel Maenle et al.

Cancer Imaging, 2024, Vol. 24(1)

Published: Apr. 11, 2024

Abstract. The evolution of Positron Emission Tomography (PET), culminating in the Total-Body PET (TB-PET) system, represents a paradigm shift in medical imaging. This paper explores the transformative role of Artificial Intelligence (AI) in enhancing clinical and research applications of TB-PET imaging. Clinically, TB-PET's superior sensitivity facilitates rapid imaging, low-dose imaging protocols, improved diagnostic capabilities, and higher patient comfort. In research, TB-PET shows promise for studying systemic interactions and improving our understanding of human physiology and pathophysiology. In parallel, AI's integration into imaging workflows, spanning from image acquisition to data analysis, marks a significant development in nuclear medicine. This review delves into the current and potential roles of AI in augmenting TB-PET/CT's functionality and utility. We explore how AI can streamline current processes and pioneer new applications, thereby maximising the technology's capabilities. The discussion also addresses the necessary steps and considerations for effectively integrating AI into TB-PET/CT practice, and highlights both efficiency gains and the challenges posed by increased complexity. In conclusion, this exploration emphasises the need for a collaborative approach in the field and advocates for shared resources and open-source initiatives as crucial steps towards harnessing the full AI/TB-PET synergy. This effort is essential for revolutionising medical imaging, ultimately leading to advancements in patient care and research.

Language: English

Cited by: 9

Automated Tumor Segmentation in Radiotherapy
Ricky R. Savjani, Michael Lauria, Supratik Bose et al.

Seminars in Radiation Oncology, 2022, Vol. 32(4), pp. 319-329

Published: Oct. 1, 2022

Autosegmentation of gross tumor volumes holds promise to decrease clinical demand and provide consistency across clinicians and institutions for radiation treatment planning. Additionally, autosegmentation can enable imaging analyses such as radiomics to construct and deploy large studies with thousands of patients. Here, we review modern results that utilize deep learning approaches to segment tumors in 5 major sites: brain, head and neck, thorax, abdomen, and pelvis. We focus on approaches that inch closer to clinical adoption, highlighting winning entries in international competitions, unique network architectures, and novel ways of overcoming specific challenges. We also broadly discuss the future of the field and the remaining barriers that must be overcome before widespread replacement or augmentation of manual contouring.

Language: English

Cited by: 27

Automated Head and Neck Tumor Segmentation from 3D PET/CT HECKTOR 2022 Challenge Report
Andriy Myronenko, Md Mahfuzur Rahman Siddiquee, Dong Yang et al.

Lecture Notes in Computer Science, 2023, No. unknown, pp. 31-37

Published: Jan. 1, 2023

Language: English

Cited by: 16