PET and CT based DenseNet outperforms advanced deep learning models for outcome prediction of oropharyngeal cancer
Baoqiang Ma, Jiapan Guo, Lisanne V. van Dijk

et al.

Radiotherapy and Oncology, Journal Year: 2025, Issue: unknown, Pages: 110852 - 110852

Published: March 1, 2025

In the HECKTOR 2022 challenge set [1], several state-of-the-art (SOTA, i.e., best-performing) deep learning models were introduced for predicting recurrence-free period (RFP) in head and neck cancer patients using PET and CT images. This study investigates whether a conventional DenseNet architecture, with optimized numbers of layers and image-fusion strategies, could achieve performance comparable to the SOTA models. The dataset comprises 489 oropharyngeal cancer (OPC) patients from seven distinct centers. It was randomly divided into a training set (n = 369) and an independent test set (n = 120). Furthermore, an additional 400 OPC patients, who underwent chemo(radiotherapy) at our center, were employed for external testing. Each patient's data included pre-treatment CT- and PET-scans, manually generated GTV (gross tumour volume) contours of primary tumors and lymph nodes, and RFP information. The present models were compared against three SOTA models developed on the same dataset. When inputting CT, PET and GTV with the early-fusion approach (considering them as different input channels), DenseNet81 (with 81 layers) obtained an internal test C-index of 0.69, a metric comparable to the SOTA models. Notably, removal of the GTV input yielded the same internal C-index of 0.69 while improving the external test C-index from 0.59 to 0.63. Relative to PET-only models, late fusion (concatenation of separately extracted features) of CT and PET demonstrated superior C-index values of 0.68 and 0.66 on the two test sets, and performed better than CT-only models on the external set. In conclusion, a basic DenseNet architecture exhibited predictive performance on par with SOTA models featuring more intricate architectures on the internal test set, and better performance on the external test set, using PET and CT imaging.
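As an aside on the metric reported above, the C-index (Harrell's concordance index) measures how often a model's risk scores correctly order pairs of patients by event time. The following is a minimal generic sketch, not the authors' evaluation code; it assumes right-censored data with binary event indicators:

```python
def concordance_index(times, events, risks):
    """Harrell's C-index: among comparable pairs, the fraction where
    the patient with the earlier event has the higher predicted risk.
    A pair (i, j) is comparable if patient i had an event (events[i]
    is truthy) strictly before patient j's observed time."""
    concordant, tied, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1        # correctly ordered pair
                elif risks[i] == risks[j]:
                    tied += 1              # tied risks count half
    return (concordant + 0.5 * tied) / comparable
```

A C-index of 0.5 corresponds to random ordering and 1.0 to perfect ranking, which puts the reported 0.63-0.69 values in context.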

Language: English

Head and Neck Tumor Segmentation on MRIs with Fast and Resource-Efficient Staged nnU-Nets
Elias Tappeiner, Christian Gapp, Martin Welk

et al.

Lecture notes in computer science, Journal Year: 2025, Issue: unknown, Pages: 87 - 98

Published: Jan. 1, 2025

Language: English

Cited by

0

Head and Neck Gross Tumor Volume Automatic Segmentation Using PocketNet
Awj Twam, Adrian Celaya, Evan Lim

et al.

Lecture notes in computer science, Journal Year: 2025, Issue: unknown, Pages: 241 - 249

Published: Jan. 1, 2025

Language: English

Cited by

0

Ensemble Deep Learning Models for Automated Segmentation of Tumor and Lymph Node Volumes in Head and Neck Cancer Using Pre- and Mid-Treatment MRI: Application of Auto3DSeg and SegResNet
Dominic LaBella

Lecture notes in computer science, Journal Year: 2025, Issue: unknown, Pages: 259 - 273

Published: Jan. 1, 2025

Language: English

Cited by

0

Pixel level deep reinforcement learning for accurate and robust medical image segmentation
Yunxin Liu, Di Yuan, Zhenghua Xu

et al.

Scientific Reports, Journal Year: 2025, Issue: 15(1)

Published: March 10, 2025

Existing deep learning methods have achieved significant success in medical image segmentation. However, this success largely relies on stacking advanced modules and architectures, which has created a path dependency. This dependency is unsustainable, as it leads to increasingly large model parameter counts and higher deployment costs. To break this dependency, we introduce deep reinforcement learning to enhance segmentation performance. However, current reinforcement-learning-based methods face challenges such as high training cost, independent iterative processes, and uncertainty of segmentation masks. Consequently, we propose Pixel-level Deep Reinforcement Learning with pixel-by-pixel Mask Generation (PixelDRL-MG) for more accurate and robust segmentation. PixelDRL-MG adopts a dynamic update policy, directly segmenting the regions of interest without requiring user interaction or coarse segmentation masks. We use a Pixel-level Asynchronous Advantage Actor-Critic (PA3C) strategy to treat each pixel as an agent whose state (foreground or background) is iteratively updated through direct actions. Our experiments on two commonly used datasets demonstrate that PixelDRL-MG achieves superior performance to state-of-the-art baselines (especially at boundaries) while using significantly fewer parameters. We also conducted detailed ablation studies to aid understanding and facilitate practical application. Additionally, PixelDRL-MG performs well in low-resource settings (i.e., 50-shot and 100-shot), making it an ideal choice for real-world scenarios.
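The pixel-as-agent idea can be illustrated with a toy sketch. This is an assumption-laden simplification, not the paper's PA3C method: each pixel holds a binary foreground/background state and greedily picks between "keep" and "flip" actions against a stand-in reward (agreement with a simple intensity threshold), whereas PixelDRL-MG learns the per-pixel policy with an actor-critic network:

```python
import numpy as np

def pixel_agent_refine(image, init_mask, steps=5, thresh=0.5):
    """Toy illustration of pixel-level agents: at every step each
    pixel takes whichever of two actions (keep its state, or flip
    foreground <-> background) yields the higher immediate reward.
    The reward here is a hypothetical stand-in: agreement with a
    fixed intensity threshold; a learned policy would replace this."""
    mask = init_mask.astype(bool).copy()
    target = image > thresh                      # stand-in reward signal
    for _ in range(steps):
        reward_keep = (mask == target).astype(float)
        reward_flip = (~mask == target).astype(float)
        flip = reward_flip > reward_keep         # greedy per-pixel action
        mask = np.where(flip, ~mask, mask)
    return mask.astype(np.uint8)
```

With this toy reward the per-pixel updates converge in one step; the point is only to show the iterative state-update structure that PA3C trains end to end.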

Language: English

Cited by

0
