PET and CT based DenseNet outperforms advanced deep learning models for outcome prediction of oropharyngeal cancer
Baoqiang Ma, Jiapan Guo, Lisanne V. van Dijk

et al.

Radiotherapy and Oncology, Journal year: 2025, Issue: unknown, Pages: 110852 - 110852

Published: March 1, 2025

In the HECKTOR 2022 challenge set [1], several state-of-the-art (SOTA, achieving best performance) deep learning models were introduced for predicting recurrence-free period (RFP) in head and neck cancer patients using PET and CT images. This study investigates whether a conventional DenseNet architecture, with optimized numbers of layers and image-fusion strategies, could achieve performance comparable to the SOTA models. The dataset comprises 489 oropharyngeal cancer (OPC) patients from seven distinct centers. It was randomly divided into a training set (n = 369) and an independent test set (n = 120). Furthermore, an additional 400 OPC patients, who underwent (chemo)radiotherapy at our center, were employed for external testing. Each patient's data included pre-treatment CT and PET scans, manually generated GTV (gross tumour volume) contours of primary tumors and lymph nodes, and RFP information. The present models were compared against three SOTA models developed on the same dataset. When inputting PET and CT with an early fusion (considering them as different channels of input) approach, DenseNet81 (with 81 layers) obtained an internal test C-index of 0.69, a metric comparable to the SOTA models. Notably, removal of the GTV contour input yielded the same internal C-index of 0.69 while improving the external test C-index from 0.59 to 0.63. Compared to PET-only models, utilizing late fusion (concatenation of separately extracted features) of CT and PET demonstrated superior C-index values of 0.68 and 0.66 in both test sets, better than PET only in the external test set. A basic DenseNet architecture thus offers predictive performance on par with SOTA models featuring more intricate architectures on the internal test set, and better performance on the external test set, based on PET and CT imaging.
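The early versus late fusion distinction described above maps naturally onto code. Below is a minimal, hedged sketch in PyTorch of the two input-fusion strategies (stacking PET and CT as channels versus concatenating separately extracted features); the tiny backbone, layer sizes, and tensor shapes are illustrative assumptions, not the study's DenseNet81 implementation.

```python
# Sketch of early vs. late fusion of PET and CT for an outcome model.
# The backbone and shapes are assumptions for demonstration only.
import torch
import torch.nn as nn

class SmallCNNBackbone(nn.Module):
    """Tiny 3D CNN standing in for a DenseNet-style encoder (assumption)."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),   # global pooling -> (B, 16, 1, 1, 1)
            nn.Flatten(),              # -> (B, 16)
        )

    def forward(self, x):
        return self.features(x)

class EarlyFusionModel(nn.Module):
    """Early fusion: PET and CT stacked as input channels of one network."""
    def __init__(self):
        super().__init__()
        self.backbone = SmallCNNBackbone(in_channels=2)
        self.head = nn.Linear(16, 1)   # single risk score for RFP prediction

    def forward(self, pet, ct):
        x = torch.cat([pet, ct], dim=1)   # (B, 2, D, H, W)
        return self.head(self.backbone(x))

class LateFusionModel(nn.Module):
    """Late fusion: separate encoders per modality, features concatenated."""
    def __init__(self):
        super().__init__()
        self.pet_backbone = SmallCNNBackbone(in_channels=1)
        self.ct_backbone = SmallCNNBackbone(in_channels=1)
        self.head = nn.Linear(32, 1)

    def forward(self, pet, ct):
        feats = torch.cat([self.pet_backbone(pet), self.ct_backbone(ct)], dim=1)
        return self.head(feats)

pet = torch.randn(2, 1, 32, 64, 64)   # toy PET volumes
ct = torch.randn(2, 1, 32, 64, 64)    # toy CT volumes
print(EarlyFusionModel()(pet, ct).shape, LateFusionModel()(pet, ct).shape)
```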

Language: English

Application of simultaneous uncertainty quantification and segmentation for oropharyngeal cancer use-case with Bayesian deep learning
Jaakko Sahlsten, Joel Jaskari, Kareem A. Wahid

et al.

Communications Medicine, Journal year: 2024, Issue: 4(1)

Published: June 8, 2024

Abstract Background Radiotherapy is a core treatment modality for oropharyngeal cancer (OPC), where the primary gross tumor volume (GTVp) is manually segmented with high interobserver variability. This calls for reliable and trustworthy automated tools in the clinician workflow. Therefore, accurate uncertainty quantification and its downstream utilization are critical. Methods Here we propose uncertainty-aware deep learning for OPC GTVp segmentation, and illustrate its utility in multiple applications. We examine two Bayesian deep learning (BDL) models and eight uncertainty measures, and utilize a large multi-institute dataset of 292 PET/CT scans to systematically analyze our approach. Results We show that our uncertainty-based approach accurately predicts the quality of the segmentation in 86.6% of cases, identifies low-performance cases for semi-automated correction, and visualizes regions where the segmentations are likely to fail. Conclusions Our BDL-based analysis provides a first step towards more widespread implementation of uncertainty-aware segmentation.
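As an illustration of the kind of uncertainty-aware prediction the abstract describes, the following sketch uses Monte Carlo dropout, one common Bayesian deep learning approximation, to produce a mean segmentation and a per-voxel uncertainty map; the toy network, dropout rate, and number of samples are assumptions and do not reproduce the paper's specific BDL models or its eight uncertainty measures.

```python
# Minimal sketch: Monte Carlo dropout as a stand-in Bayesian approximation
# for producing a segmentation plus a per-voxel uncertainty map.
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Repeated stochastic forward passes; return mean foreground probability
    and a per-voxel uncertainty map (binary predictive entropy)."""
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    mean_p = probs.mean(dim=0)
    entropy = -(mean_p * torch.log(mean_p + 1e-8)
                + (1 - mean_p) * torch.log(1 - mean_p + 1e-8))
    return mean_p, entropy

# Toy segmentation network with dropout (stand-in for the paper's BDL models)
net = nn.Sequential(nn.Conv3d(2, 8, 3, padding=1), nn.ReLU(),
                    nn.Dropout3d(0.2), nn.Conv3d(8, 1, 1))
pet_ct = torch.randn(1, 2, 16, 32, 32)          # toy PET/CT input
segmentation, uncertainty = mc_dropout_predict(net, pet_ct)
print(segmentation.shape, uncertainty.shape)
```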

Language: English

Cited by

5

Enhancing the reliability of deep learning-based head and neck tumour segmentation using uncertainty estimation with multi-modal images
Jintao Ren, Jonas Teuwen, Jasper Nijkamp

et al.

Physics in Medicine and Biology, Journal year: 2024, Issue: 69(16), Pages: 165018 - 165018

Published: July 26, 2024

Abstract Objective. Deep learning shows promise in autosegmentation of head and neck cancer (HNC) primary tumours (GTV-T) and nodal metastases (GTV-N). However, errors such as including non-tumour regions or missing tumour regions still occur. Conventional methods often make overconfident predictions, compromising reliability. Incorporating uncertainty estimation, which provides calibrated confidence intervals, can address this issue. Our aim was to investigate the efficacy of various uncertainty estimation methods in improving segmentation reliability. We evaluated their confidence levels in voxel predictions and their ability to reveal potential errors. Approach. We retrospectively collected data from 567 HNC patients with diverse tumour sites and multi-modality images (CT, PET, T1-, T2-weighted MRI) along with clinical GTV-T/N delineations. Using the nnUNet 3D pipeline, we compared seven uncertainty estimation methods, evaluating them based on accuracy (Dice similarity coefficient, DSC), calibration (Expected Calibration Error, ECE), and error detection ability (Uncertainty-Error overlap using DSC, UE-DSC). Main results. Evaluated on the hold-out test dataset (n = 97), the median DSC scores for GTV-T and GTV-N across all methods had a narrow range, from 0.73 to 0.76 and from 0.78 to 0.80, respectively. In contrast, the ECE exhibited a wider range, from 0.30 to 0.12 for GTV-T and from 0.25 to 0.09 for GTV-N. Similarly, the UE-DSC also ranged broadly, from 0.21 to 0.38 for GTV-T and from 0.22 to 0.36 for GTV-N. A probabilistic network, the PhiSeg method, consistently demonstrated the best performance in terms of UE-DSC. Significance. This study highlights the importance of uncertainty estimation in enhancing the reliability of deep learning segmentation of the GTV. The results show that while segmentation accuracy may be similar across methods, reliability, measured by calibration error and uncertainty-error overlap, varies significantly. Used as visualisation maps, these methods may effectively pinpoint uncertainties at the voxel level.
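A minimal sketch of two of the evaluation metrics named above, the Dice similarity coefficient and a binned expected calibration error over voxel predictions; the binning scheme and the 0.5 decision threshold are assumptions, and the UE-DSC metric is not reproduced here.

```python
# Sketch of DSC and a simple binned ECE on voxel-wise probabilities.
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    return 2.0 * np.logical_and(pred, target).sum() / (pred.sum() + target.sum() + eps)

def expected_calibration_error(probs: np.ndarray, target: np.ndarray, n_bins: int = 10) -> float:
    """Binned gap between predicted-class confidence and empirical accuracy."""
    probs, target = probs.ravel(), target.ravel().astype(bool)
    pred = probs >= 0.5                               # assumed decision threshold
    confidence = np.where(pred, probs, 1.0 - probs)   # confidence of predicted class
    correct = (pred == target).astype(float)
    bins = np.linspace(0.5, 1.0 + 1e-8, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidence >= lo) & (confidence < hi)
        if mask.any():
            ece += mask.mean() * abs(confidence[mask].mean() - correct[mask].mean())
    return ece

probs = np.random.rand(16, 32, 32)        # toy voxel probabilities
gt = np.random.rand(16, 32, 32) > 0.7     # toy ground-truth mask
print(dice(probs > 0.5, gt), expected_calibration_error(probs, gt))
```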

Language: English

Cited by

5

Deep learning for [18F]fluorodeoxyglucose-PET-CT classification in patients with lymphoma: a dual-centre retrospective analysis
Ida Häggström, Doris Leithner, Jennifer Alvén

et al.

The Lancet Digital Health, Journal year: 2023, Issue: 6(2), Pages: e114 - e125

Published: Dec. 21, 2023

The rising global cancer burden has led to an increasing demand for imaging tests such as [18F]fluorodeoxyglucose-PET-CT.

Language: English

Cited by

10

MedShapeNet – a large-scale dataset of 3D medical shapes for computer vision
Jianning Li, Zongwei Zhou, Jiancheng Yang

et al.

Biomedical Engineering / Biomedizinische Technik, Journal year: 2024, Issue: unknown

Published: Dec. 29, 2024

Abstract Objectives The shape is commonly used to describe the objects. State-of-the-art algorithms in medical imaging are predominantly voxel-based, diverging from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This is seen in the growing popularity of ShapeNet (51,300 models) and Princeton ModelNet (127,915 models). However, a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments is missing. Methods We present MedShapeNet to translate data-driven vision algorithms to medical applications and to adapt state-of-the-art vision algorithms to medical problems. As a unique feature, we directly model the majority of shapes on the imaging data of real patients. We present use cases in classifying brain tumors, skull reconstructions, multi-class anatomy completion, education, and 3D printing. Results By now, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications in virtual, augmented, or mixed reality, and 3D printing. Conclusions MedShapeNet contains medical shapes and will continue to collect data for benchmarks and applications. The project page is: https://medshapenet.ikim.nrw/ .
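As a rough illustration of the shape representations mentioned above (meshes, point clouds, voxel grids), the sketch below converts a synthetic mesh between them using the trimesh library; it does not use the MedShapeNet Python API, and the file name in the commented line is a hypothetical placeholder.

```python
# Converting one 3D shape between mesh, point-cloud, and voxel representations.
import trimesh

mesh = trimesh.creation.icosphere(subdivisions=3, radius=1.0)  # stand-in anatomical shape
# mesh = trimesh.load("some_downloaded_shape.stl")             # hypothetical local file

points = mesh.sample(2048)            # surface point cloud, shape (2048, 3)
voxels = mesh.voxelized(pitch=0.1)    # occupancy voxel grid
print(mesh.vertices.shape, points.shape, voxels.matrix.shape)
```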

Language: English

Cited by

4

Probability maps for deep learning-based head and neck tumor segmentation: Graphical User Interface design and test
Alessia de Biase, Liv Ziegfeld, Nanna M. Sijtsema

et al.

Computers in Biology and Medicine, Journal year: 2024, Issue: 177, Pages: 108675 - 108675

Published: May 28, 2024

The different tumor appearance of head and neck cancer across imaging modalities, scanners, and acquisition parameters accounts for the highly subjective nature of the manual segmentation task. The variability of manual contours is one of the causes of the lack of generalizability and suboptimal performance of deep learning (DL) based auto-segmentation models. Therefore, a DL-based method was developed that outputs predicted tumor probabilities for each PET-CT voxel in the form of a probability map instead of one fixed contour. The aim of this study was to show that DL-generated probability maps are a clinically relevant, intuitive, and more suitable solution to assist radiation oncologists in gross tumor volume segmentation on PET-CT images of head and neck cancer patients.
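A small sketch of the core idea described above: retaining a per-voxel probability map rather than collapsing it to one fixed contour, so that different operating points can be inspected; the array shape and thresholds are illustrative assumptions.

```python
# Probability map vs. one fixed contour for a DL segmentation output.
import numpy as np

prob_map = np.random.rand(16, 64, 64)       # toy DL output, values in [0, 1]

# A fixed contour collapses the map to one binary decision...
fixed_contour = prob_map >= 0.5

# ...whereas the probability map supports several clinically chosen operating points.
for threshold in (0.3, 0.5, 0.7):
    volume_voxels = int((prob_map >= threshold).sum())
    print(f"threshold {threshold:.1f}: {volume_voxels} voxels in the candidate GTV")
```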

Language: English

Cited by

3

Overview of the Head and Neck Tumor Segmentation for Magnetic Resonance Guided Applications (HNTS-MRG) 2024 Challenge
Kareem A. Wahid, Cem Dede, Dina El-Habashy

et al.

Lecture notes in computer science, Journal year: 2025, Issue: unknown, Pages: 1 - 35

Published: Jan. 1, 2025

Language: English

Cited by

0

The prognostic value of pathologic lymph node imaging using deep learning-based outcome prediction in oropharyngeal cancer patients
Baoqiang Ma, Alessia de Biase, Jiapan Guo

et al.

Physics and Imaging in Radiation Oncology, Journal year: 2025, Issue: 33, Pages: 100733 - 100733

Published: Jan. 1, 2025

Deep learning (DL) models can extract prognostic image features from pre-treatment PET/CT scans. The study objective was to explore the potential benefits of incorporating pathologic lymph node (PL) spatial information, in addition to that of the primary tumor (PT), in DL-based models for predicting local control (LC), regional control (RC), distant-metastasis-free survival (DMFS), and overall survival (OS) in oropharyngeal cancer (OPC) patients. The study included 409 OPC patients treated with definitive (chemo)radiotherapy between 2010 and 2022. Patient data, including PET/CT scans, manually contoured PT (GTVp) and PL (GTVln) structures, clinical variables, and endpoints, were collected. Firstly, a DL method was employed to segment the tumours on PET/CT, resulting in predicted tumor probability maps for the primary tumor (TPMp) and lymph nodes (TPMln). Secondly, different combinations of CT, PET, probability maps, and manual contours from 300 patients were used to train outcome prediction models for each endpoint through 5-fold cross validation. Model performance, assessed by the concordance index (C-index), was evaluated using a test set of 100 patients. Including PL improved the C-index results for all endpoints except LC. For LC, comparable C-indices (around 0.66) were observed between models trained with only the PT and those including PL as an additional structure. Models with PT and PL combined into a single structure achieved the highest C-indices of 0.65 and 0.80 for RC and DMFS prediction, respectively, while models treating these target structures as separate entities achieved a C-index of 0.70 for OS. Incorporating PL spatial information improved prediction performance for RC, DMFS, and OS.
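The concordance index used for evaluation above can be computed as in the following hedged sketch with the lifelines library on synthetic data; the follow-up times, event indicators, and risk scores are placeholders, not study data.

```python
# Sketch of C-index evaluation for a survival-type outcome model on toy data.
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
follow_up_years = rng.exponential(scale=3.0, size=100)   # time to event or censoring
event_observed = rng.integers(0, 2, size=100)            # 1 = event, 0 = censored
risk_score = rng.normal(size=100)                        # model output (higher = worse prognosis)

# lifelines expects higher values to mean longer survival, so negate the risk score.
cindex = concordance_index(follow_up_years, -risk_score, event_observed)
print(f"C-index: {cindex:.3f}")
```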

Language: English

Cited by

0

MRI-Based Head and Neck Tumor Segmentation Using nnU-Net with 15-Fold Cross-Validation Ensemble
Frank N. Mol, Luuk Van der Hoek, Baoqiang Ma

et al.

Lecture notes in computer science, Journal year: 2025, Issue: unknown, Pages: 179 - 190

Published: Jan. 1, 2025

Language: English

Cited by

0

Comparative Analysis of nnUNet and MedNeXt for Head and Neck Tumor Segmentation in MRI-Guided Radiotherapy
Nikoo Moradi, André Ferreira, Behrus Puladi

et al.

Lecture notes in computer science, Journal year: 2025, Issue: unknown, Pages: 136 - 153

Published: Jan. 1, 2025

Language: English

Cited by

0

Enhanced nnU-Net Architectures for Automated MRI Segmentation of Head and Neck Tumors in Adaptive Radiation Therapy
Jessica Kächele, Maximilian Zenk, Maximilian Rokuss

et al.

Lecture notes in computer science, Journal year: 2025, Issue: unknown, Pages: 50 - 64

Published: Jan. 1, 2025

Language: English

Cited by

0