Automated Head and Neck Tumor Segmentation from 3D PET/CT: HECKTOR 2022 Challenge Report DOI
Andriy Myronenko, Md Mahfuzur Rahman Siddiquee, Dong Yang

et al.

Lecture notes in computer science, Journal Year: 2023, Volume and Issue: unknown, P. 31 - 37

Published: Jan. 1, 2023

Language: English

SegRap2023: A benchmark of organs-at-risk and gross tumor volume Segmentation for Radiotherapy Planning of Nasopharyngeal Carcinoma DOI
Xiangde Luo, Jia Fu, Yunxin Zhong

et al.

Medical Image Analysis, Journal Year: 2025, Volume and Issue: 101, P. 103447 - 103447

Published: Jan. 2, 2025

Language: English

Citations

2

Synthetic data as an enabler for machine learning applications in medicine DOI Creative Commons
Jean-François Rajotte, Robert V. Bergen, David L. Buckeridge

et al.

iScience, Journal Year: 2022, Volume and Issue: 25(11), P. 105331 - 105331

Published: Oct. 13, 2022

Synthetic data generation (SDG) is the process of using machine learning methods to train a model that captures the patterns in a real dataset. New, or synthetic, data can then be generated from the trained model. The synthetic data do not have a one-to-one mapping to the original patients and therefore have potential privacy-preserving properties. There is growing interest in the application of SDG across health and life sciences, but to fully realize its benefits, further education, research, and policy innovation are required. This article summarizes the opportunities and challenges of SDG for health data and provides directions on how this technology can be leveraged to accelerate data access for secondary purposes.
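A minimal sketch of the SDG workflow described above (fit a generative model to real records, then sample synthetic ones), assuming a toy tabular cohort and a scikit-learn Gaussian mixture model; the feature names and model choice are illustrative assumptions, not the article's method.

```python
# Sketch only: synthetic data generation on a toy tabular cohort.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy "real" cohort: age (years) and systolic blood pressure (mmHg) -- assumed features.
real = np.column_stack([
    rng.normal(62, 10, size=500),   # age
    rng.normal(135, 15, size=500),  # systolic BP
])

# 1) Train a model that captures the joint distribution of the real data.
gmm = GaussianMixture(n_components=3, random_state=0).fit(real)

# 2) Generate new, synthetic records from the trained model;
#    these have no one-to-one mapping to the original patients.
synthetic, _ = gmm.sample(500)

print("real mean      :", real.mean(axis=0).round(1))
print("synthetic mean :", synthetic.mean(axis=0).round(1))
```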

Language: English

Citations

70

Overview of the HECKTOR Challenge at MICCAI 2022: Automatic Head and Neck Tumor Segmentation and Outcome Prediction in PET/CT DOI
Vincent Andrearczyk, Valentin Oreiller, Moamen Abobakr

et al.

Lecture notes in computer science, Journal Year: 2023, Volume and Issue: unknown, P. 1 - 30

Published: Jan. 1, 2023

Language: English

Citations

41

A Review of the Metrics Used to Assess Auto-Contouring Systems in Radiotherapy DOI Creative Commons
K. Mackay, D. Bernstein, Ben Glocker

et al.

Clinical Oncology, Journal Year: 2023, Volume and Issue: 35(6), P. 354 - 369

Published: Jan. 31, 2023

Auto-contouring could revolutionise future planning of radiotherapy treatment. The lack of consensus on how to assess and validate auto-contouring systems currently limits their clinical use. This review formally quantifies the assessment metrics used in studies published during one calendar year and assesses the need for standardised practice. A PubMed literature search was undertaken for papers evaluating auto-contouring systems published in 2021. Papers were assessed for the types of metric reported and the methodology used to generate ground-truth comparators. Our search identified 212 studies, of which 117 met the criteria for review. Geometric metrics were used in 116 (99.1%) studies; this includes the Dice Similarity Coefficient, used in 113 (96.6%) studies. Clinically relevant metrics, such as qualitative, dosimetric and time-saving metrics, were used less frequently, in 22 (18.8%), 27 (23.1%) and 18 (15.4%) studies, respectively. There was heterogeneity within each category of metric: over 90 different names for geometric measures were used, methods of qualitative assessment differed in all but two papers, and variation existed in the methods used to generate plans for dosimetric assessment. Consideration of editing time was only given in 11 (9.4%) studies. A single manual contour was used as the comparator in 65 (55.6%) studies, and only 31 (26.5%) compared auto-contours against the usual inter- and/or intra-observer variation. In conclusion, significant variation exists in how research studies assess the accuracy of automatically generated contours. Geometric metrics are the most popular; however, their clinical utility is unknown. Considering the stages of system implementation may provide a framework for deciding on the most appropriate metrics. This analysis supports a standardised approach to assessing auto-contouring.
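Since the Dice Similarity Coefficient dominates the reviewed studies, a small illustrative sketch of how DSC (and IoU, for contrast) can be computed from binary masks may be useful; the masks and shapes below are hypothetical, and this is not code from the review.

```python
# Sketch: DSC = 2|A∩B| / (|A|+|B|) and IoU = |A∩B| / |A∪B| for binary contours.
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks of identical shape."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection over Union (Jaccard index) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return 1.0 if union == 0 else np.logical_and(a, b).sum() / union

if __name__ == "__main__":
    gt = np.zeros((64, 64), dtype=bool)
    gt[20:40, 20:40] = True            # "ground-truth" contour (toy example)
    pred = np.zeros_like(gt)
    pred[24:44, 22:42] = True          # shifted "auto-contour"
    print(f"DSC = {dice_coefficient(gt, pred):.3f}, IoU = {iou(gt, pred):.3f}")
```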

Language: English

Citations

39

Screening for extranodal extension in HPV-associated oropharyngeal carcinoma: evaluation of a CT-based deep learning algorithm in patient data from a multicentre, randomised de-escalation trial DOI Creative Commons
Benjamin H. Kann, Jirapat Likitlersuang, Dennis Bontempi

et al.

The Lancet Digital Health, Journal Year: 2023, Volume and Issue: 5(6), P. e360 - e369

Published: April 21, 2023

Language: English

Citations

36

The autoPET challenge: Towards fully automated lesion segmentation in oncologic PET/CT imaging DOI Creative Commons
Sergios Gatidis, Marcel Früh, Matthias P. Fabritius

et al.

Research Square, Journal Year: 2023, Volume and Issue: unknown

Published: June 14, 2023

Abstract: We describe the results of the autoPET challenge, a biomedical image analysis challenge aimed to motivate and focus research in the field of automated whole-body PET/CT analysis. The task was the segmentation of metabolically active tumor lesions on FDG-PET/CT. Challenge participants had access to one of the largest publicly available annotated data sets for algorithm training. Over 350 teams from all continents registered for the challenge; the seven best-performing contributions were awarded at the MICCAI annual meeting 2022. Based on the results, we conclude that automated lesion segmentation is feasible with high accuracy using state-of-the-art deep learning methods. We observed that performance on this task may primarily rely on the quality and quantity of the input data and less on technical details of the underlying architecture. Future challenge iterations will focus on clinical translation.

Language: English

Citations

31

Auto-segmentation of head and neck tumors in positron emission tomography images using non-local means and morphological frameworks DOI Open Access
Sahel Heydarheydari, Mohammad Javad Tahmasebi Birgani, Seyed Masoud Rezaeijo

et al.

Polish Journal of Radiology, Journal Year: 2023, Volume and Issue: 88, P. 365 - 370

Published: Aug. 14, 2023

Accurately segmenting head and neck cancer (HNC) tumors in medical images is crucial for effective treatment planning. However, current methods for HNC segmentation are limited in their accuracy and efficiency. The present study aimed to design a model for segmenting HNC tumors in three-dimensional (3D) positron emission tomography (PET) images using Non-Local Means (NLM) and morphological operations. The proposed model was tested on data from the HECKTOR challenge public dataset, which included 408 patient images with HNC tumors. NLM was utilized for image noise reduction and preservation of critical image information. Following pre-processing, morphological operations were used to assess the similarity of intensity and edge information within the images. The Dice score and Intersection over Union (IoU) were used to evaluate the manual and predicted segmentation results. The model achieved an average Dice score of 81.47 ± 3.15, an IoU of 80 ± 4.5, and an additional reported score of 94.03 ± 4.44, demonstrating its effectiveness for segmenting tumors in PET images. The algorithm provides the capability to produce patient-specific tumor segmentations without manual interaction, addressing the limitations of current segmentation methods. It has the potential to improve treatment planning and aid the development of personalized medicine. Additionally, the method can be extended to effectively segment other organs from annotated images.
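A rough sketch of such an NLM-plus-morphology pipeline on a synthetic 2D slice, assuming scikit-image; the paper's own 3D framework and parameters are not reproduced here, and all values below (patch sizes, threshold choice, structuring-element radii) are illustrative assumptions.

```python
# Sketch only: NLM denoising -> thresholding -> morphological clean-up on a toy "PET" slice.
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, binary_closing, remove_small_objects, disk

rng = np.random.default_rng(0)

# Synthetic noisy slice: a bright circular "lesion" on a dark background.
yy, xx = np.mgrid[:128, :128]
slice_pet = np.where((yy - 64) ** 2 + (xx - 64) ** 2 < 15 ** 2, 1.0, 0.1)
slice_pet += rng.normal(0, 0.15, slice_pet.shape)

# 1) Non-Local Means denoising (reduces noise while preserving edge information).
sigma = float(np.mean(estimate_sigma(slice_pet)))
denoised = denoise_nl_means(slice_pet, patch_size=5, patch_distance=7, h=0.8 * sigma)

# 2) Threshold the denoised uptake map (Otsu used here as a simple stand-in).
mask = denoised > threshold_otsu(denoised)

# 3) Morphological post-processing: close gaps, remove speckle, smooth the boundary.
mask = binary_closing(mask, disk(2))
mask = remove_small_objects(mask, min_size=20)
mask = binary_opening(mask, disk(1))

print("segmented pixels:", int(mask.sum()))
```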

Language: English

Citations

31

Comparison of deep learning networks for fully automated head and neck tumor delineation on multi-centric PET/CT images DOI Creative Commons
Yiling Wang, Elia Lombardo, Lili Huang

et al.

Radiation Oncology, Journal Year: 2024, Volume and Issue: 19(1)

Published: Jan. 8, 2024

Abstract: Objectives: Deep learning-based auto-segmentation of head and neck cancer (HNC) tumors is expected to have better reproducibility than manual delineation. Positron emission tomography (PET) and computed tomography (CT) are commonly used in tumor segmentation. However, current methods still face challenges in handling whole-body scans, where selection of a bounding box may be required. Moreover, different institutions might apply different delineation guidelines. This study aimed at exploring the auto-localization and segmentation of HNC tumors from entire PET/CT scans and investigating the transferability of trained baseline models to external real-world cohorts. Methods: We employed a 2D Retina Unet to find the tumor and utilized a regular 2D Unet to segment the union of the tumor and involved lymph nodes. In comparison, 2D and 3D Unets were also implemented to localize and segment the same target in an end-to-end manner. Segmentation performance was evaluated via the Dice similarity coefficient (DSC) and the Hausdorff distance at the 95th percentile (HD95). Delineated PET/CT scans from the HECKTOR challenge were used to train the baseline models by 5-fold cross-validation. Another 271 delineated PET/CTs from three external institutions (MAASTRO, CRO, BERLIN) were used for testing. Finally, facility-specific transfer learning was applied to investigate the improvement over the baseline models. Results: Encouraging localization results were observed, achieving a maximum omnidirectional tumor center difference lower than 6.8 cm. The networks yielded similar averaged cross-validation (CV) results, with a DSC in the range 0.71–0.75, while the CV HD95 was 8.6, 10.7 and 9.8 mm for the cascaded workflow and the end-to-end 2D and 3D Unets, respectively. More than a 10% drop in DSC and a 40% increase in HD95 were observed if the trained models were tested on the external cohorts directly. After facility-specific transfer learning, testing results improved on all external cohorts (best DSC of 0.70 for the MAASTRO cohort; HD95 of 7.8 and 7.9 mm for the CRO cohort; DSCs of 0.76 and 0.67 for the BERLIN cohorts; HD95 of 12.4 mm for one cohort). Conclusion: The 3D Unet outperformed the other two models in most cohorts. Facility-specific transfer learning can potentially improve segmentation performance for individual institutions, where the 2D networks could achieve performance comparable to, or even better than, the 3D Unet.
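For reference, a hedged sketch of the two reported metrics, DSC and HD95, computed for binary 3D masks with NumPy/SciPy; the voxel spacing and example masks are illustrative assumptions rather than the study's evaluation code.

```python
# Sketch: DSC and 95th-percentile Hausdorff distance (HD95) for binary 3D masks.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dsc(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * (a & b).sum() / denom

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th percentile of symmetric surface distances between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ binary_erosion(a)                              # boundary voxels of a
    surf_b = b ^ binary_erosion(b)                              # boundary voxels of b
    dist_to_b = distance_transform_edt(~b, sampling=spacing)    # mm to nearest b voxel
    dist_to_a = distance_transform_edt(~a, sampling=spacing)
    surface_distances = np.concatenate([dist_to_b[surf_a], dist_to_a[surf_b]])
    return float(np.percentile(surface_distances, 95))

if __name__ == "__main__":
    gt = np.zeros((48, 48, 48), dtype=bool)
    gt[16:32, 16:32, 16:32] = True                              # "manual" GTV (toy)
    pred = np.zeros_like(gt)
    pred[18:34, 17:33, 16:32] = True                            # shifted "auto-segmentation"
    print(f"DSC  = {dsc(gt, pred):.3f}")
    print(f"HD95 = {hd95(gt, pred, spacing=(3.0, 1.0, 1.0)):.1f} mm")
```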

Language: English

Citations

9

Deep learning aided oropharyngeal cancer segmentation with adaptive thresholding for predicted tumor probability in FDG PET and CT images DOI Creative Commons
Alessia de Biase, Nanna M. Sijtsema, Lisanne V. van Dijk

et al.

Physics in Medicine and Biology, Journal Year: 2023, Volume and Issue: 68(5), P. 055013 - 055013

Published: Feb. 7, 2023

Tumor segmentation is a fundamental step for radiotherapy treatment planning. To define an accurate segmentation of the primary tumor (GTVp) of oropharyngeal cancer (OPC) patients, simultaneous assessment of different image modalities is needed, and each image volume is explored slice-by-slice from different orientations. Moreover, a manual fixed boundary neglects the spatial uncertainty known to occur in tumor delineation. This study proposes a novel automatic deep learning (DL) model to assist radiation oncologists in adaptive GTVp segmentation on registered FDG PET/CT images. We included 138 OPC patients treated with (chemo)radiation at our institute. Our DL framework exploits both inter- and intra-slice context. Sequences of 3 consecutive 2D slices of concatenated PET/CT images and GTVp contours were used as input. A 3-fold cross-validation was performed three times, with training sequences extracted from the Axial (A), Sagittal (S), and Coronal (C) planes of 113 patients. Since the sequences contain overlapping slices, each slice resulted in three outcome predictions that were averaged. In the A, S, and C planes, the output shows areas with different probabilities of predicting the tumor. The performance of the models was assessed on 25 patients at different probability thresholds using the mean Dice Score Coefficient (DSC). Predictions were closest to the ground truth at a probability threshold of 0.9 (DSC of 0.70, 0.77, and 0.80 for the A, S, and C planes, respectively). The promising results of the proposed model show that the predicted probability maps could guide adaptive GTVp segmentation.
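A minimal sketch of the thresholding step described above, assuming a synthetic probability volume in place of the model's output: binarise the averaged probability map at several thresholds and keep the one with the highest DSC against the manual contour.

```python
# Sketch only: sweep probability thresholds on an averaged prediction map and score with DSC.
import numpy as np

def dsc(a: np.ndarray, b: np.ndarray) -> float:
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

rng = np.random.default_rng(0)

# Synthetic ground-truth GTVp and a noisy "predicted probability" volume (stand-ins).
gt = np.zeros((32, 64, 64), dtype=bool)
gt[10:22, 24:44, 24:44] = True
prob = np.clip(gt.astype(float) * 0.95 + rng.normal(0, 0.15, gt.shape), 0, 1)

# Evaluate DSC at several probability thresholds and keep the best one.
thresholds = np.arange(0.5, 0.96, 0.05)
scores = {round(float(t), 2): dsc(prob >= t, gt) for t in thresholds}
best_t = max(scores, key=scores.get)
for t, s in scores.items():
    print(f"threshold {t:.2f}: DSC = {s:.3f}")
print(f"best threshold: {best_t:.2f}")
```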

Language: English

Citations

21

A comparative study of attention mechanism based deep learning methods for bladder tumor segmentation DOI
Qi Zhang, Yinglu Liang, Yi Zhang

et al.

International Journal of Medical Informatics, Journal Year: 2023, Volume and Issue: 171, P. 104984 - 104984

Published: Jan. 5, 2023

Language: English

Citations

20