A method for sensitivity analysis of automatic contouring algorithms across different contrast weightings using synthetic magnetic resonance imaging
Lucas McCullum, Zayne Belal, Warren Floyd

et al.

Physics and Imaging in Radiation Oncology, Journal Year: 2025, Volume and Issue: unknown, P. 100790 - 100790

Published: May 1, 2025

Language: English

HaN-Seg: The head and neck organ-at-risk CT and MR segmentation challenge
Gašper Podobnik, Bulat Ibragimov, Elias Tappeiner

et al.

Radiotherapy and Oncology, Journal Year: 2024, Volume and Issue: 198, P. 110410 - 110410

Published: June 24, 2024

To promote the development of auto-segmentation methods for head and neck (HaN) radiation treatment (RT) planning that exploit the information of both computed tomography (CT) and magnetic resonance (MR) imaging modalities, we organized HaN-Seg: The Head and Neck Organ-at-Risk CT and MR Segmentation Challenge. The challenge task was to automatically segment 30 organs-at-risk (OARs) of the HaN region in 14 withheld test cases, given the availability of 42 publicly available training cases. Each case consisted of one contrast-enhanced CT and one T1-weighted MR image of the same patient, with up to 30 corresponding reference OAR delineation masks. Performance was evaluated in terms of the Dice similarity coefficient (DSC) and the 95-percentile Hausdorff distance (HD95), and statistical ranking was applied for each metric by pairwise comparison of the submitted methods using the Wilcoxon signed-rank test. While 23 teams registered for the challenge, only seven submitted their results in the final phase. The top-performing team achieved a DSC of 76.9 % and an HD95 of 3.5 mm. All participating teams utilized architectures based on U-Net, with the winning entry leveraging rigid registration combined with network entry-level concatenation of both modalities. The challenge simulated a real-world clinical scenario by providing non-registered images with varying fields-of-view and voxel sizes. Remarkably, the top-performing segmentation results surpassed the inter-observer agreement measured on this dataset. These results set a benchmark for future research on this dataset and for paired multi-modal segmentation in general.
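The evaluation protocol summarized in this abstract (per-structure DSC and HD95, ranked by pairwise Wilcoxon signed-rank tests) can be sketched in a few lines of Python. This is a minimal illustration, not the challenge's official evaluation code; the helper names and the simplification of computing HD95 over all foreground voxels (rather than extracted surfaces) are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import wilcoxon

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric 95th-percentile Hausdorff distance, computed here over all
    foreground voxels (a surface-based variant would extract boundaries first)."""
    pa = np.argwhere(a) * np.asarray(spacing)
    pb = np.argwhere(b) * np.asarray(spacing)
    d_ab = cKDTree(pb).query(pa)[0]   # nearest-neighbour distances A -> B
    d_ba = cKDTree(pa).query(pb)[0]   # nearest-neighbour distances B -> A
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))

def significantly_different(scores_team1, scores_team2, alpha=0.05) -> bool:
    """Pairwise Wilcoxon signed-rank test on two teams' per-case scores."""
    _, p = wilcoxon(scores_team1, scores_team2)
    return p < alpha
```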

Language: English

Citations

10

A Method for Sensitivity Analysis of Automatic Contouring Algorithms Across Different MRI Contrast Weightings Using SyntheticMR
Lucas McCullum, Zayne Belal, Warren Floyd

et al.

medRxiv (Cold Spring Harbor Laboratory), Journal Year: 2025, Volume and Issue: unknown

Published: Jan. 12, 2025

Currently, a majority of institution-specific automatic MRI-based contouring algorithms are trained, tested, and validated on one contrast weighting (i.e., T2-weighted); however, their actual performance within this contrast weighting and across different acquisition parameters (repetition time, TR, and echo time, TE) is under-investigated and poorly understood. As a result, external institutions with different scan protocols for the same contrast weighting may experience sub-optimal model performance. The purpose of this study was to develop a method to evaluate the robustness of automatic contouring algorithms across varying MRI contrast weightings. One healthy volunteer and one patient were scanned using SyntheticMR on an MR-Simulation device. The parotid and submandibular glands in these subjects were contoured by an algorithm trained on T2-weighted MRIs. For ground-truth manual contours, two radiation oncology residents and one pre-resident physician were recruited and a STAPLE consensus was determined. A total of 216 TR and TE combinations were simulated across the T1-, T2-, and PD-weighted ranges using SyntheticMR's post-processing software, SyMRI. Comparisons between contours were quantified using the Dice similarity coefficient (DSC) and the 95th percentile Hausdorff distance (HD95). Notable differences in the model's performance were seen across each contrast-weighted range, and even within the T2-weighted range. Further, the model performed as well as or better on some subsets of the T1-weighted range, while the worst discrepancies in DSC and HD95 exceeded 0.2 and 3.66 mm, respectively, for some structures. In the region where the model was trained, 100%, 40%, 24%, and 57% of combinations fell within the interobserver variability for the left parotid, right parotid, left submandibular gland, and right submandibular gland, respectively. This demonstrates variable model performance across TR and TE combinations. The methodology could be applied in future studies evaluating model sensitivity, out-of-distribution detection ability, and model drift.
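The sensitivity sweep this abstract describes (simulate a grid of TR/TE combinations, contour each synthesized image, and compare against the consensus contour) can be illustrated with a short Python sketch. The synthesis and contouring functions below are dummy placeholders standing in for SyMRI and the T2-weighted-trained auto-contouring model, and the TR/TE ranges are assumed, not taken from the study protocol.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def synthesize_image(tr_ms, te_ms, shape=(64, 64, 64)):
    # Placeholder for SyMRI-style contrast synthesis from quantitative maps.
    return rng.normal(size=shape)

def predict_contour(image):
    # Placeholder for the auto-contouring model trained on T2-weighted MRIs.
    return image > 1.0

def dice(a, b):
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Assumed 18 x 12 = 216 TR/TE combinations; real ranges come from the study protocol.
tr_values = np.linspace(300.0, 5000.0, 18)
te_values = np.linspace(10.0, 120.0, 12)

consensus = rng.normal(size=(64, 64, 64)) > 1.0  # stand-in for the STAPLE consensus mask

dsc_map = {
    (tr, te): dice(predict_contour(synthesize_image(tr, te)), consensus)
    for tr, te in itertools.product(tr_values, te_values)
}
worst = min(dsc_map, key=dsc_map.get)
print(f"Worst DSC {dsc_map[worst]:.3f} at TR={worst[0]:.0f} ms, TE={worst[1]:.0f} ms")
```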

Language: English

Citations

0

A general model for head and neck auto‐segmentation with patient pre‐treatment imaging during adaptive radiation therapy

Brett W. Clark, Nicholas Hardcastle, Mathieu Gaudreault

et al.

Medical Physics, Journal Year: 2025, Volume and Issue: unknown

Published: March 7, 2025

During head and neck (HN) radiation therapy, patients may undergo anatomical changes due to tumor shrinkage or weight loss. For these patients, adaptive radiation therapy (ART) is required to correct treatment plans and ensure that the prescribed dose is delivered while minimizing dose to surrounding organs-at-risk (OARs). Patient pre-treatment images and segmentation labels are always available during ART and can be incorporated into deep learning (DL) auto-segmentation models to improve performance on mid-treatment images. Existing DL methods typically incorporate pre-treatment data during training. In this work, we investigated whether including pre-treatment data at inference time would affect model performance, as inference-time inclusion would eliminate the requirement for costly retraining on new patient cohorts. We developed a general model (GAM) that included pre-treatment data through additional input channels. We compared the GAM with a patient-specific model (PSM), which included pre-treatment data during training, a reference model (RM), which did not include pre-treatment data, and a rigid image registration (RIR) method. Models were evaluated using a large dataset of pre- and mid-treatment computed tomography images (primary gross tumor volume [GTVp] and 16 OARs) from 110 patients who underwent radiation therapy for HN cancer. The GAM showed improved performance over the PSM and RM for several structures, with the largest differences in Dice similarity coefficient for difficult-to-segment structures: GTVp (RM: 0.17, PSM: 0.36, GAM: 0.61, RIR: 0.65) and left/right brachial plexus (RM: 0.38/0.35, PSM: 0.43/0.43, GAM: 0.49/0.49, RIR: 0.36/0.38). The GAM attained performance similar to RIR for all structures except the brainstem (GAM: 0.82, RIR: 0.74), mandible (GAM: 0.88, RIR: 0.68), and spinal cord (GAM: 0.76, RIR: 0.51), where it performed higher. The GAM can support auto-segmentation during ART, in particular for structures with high variability and low contrast. Including pre-treatment data at inference time can give improvements over standard models for some OARs while eliminating the need for patient-specific retraining. However, rigid registration provides the most accurate contours for most OARs.
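The "additional input channels" idea behind the GAM can be shown with a short PyTorch sketch: the mid-treatment image is stacked channel-wise with the pre-treatment image and its one-hot segmentation labels, so a single network can condition on the prior anatomy at inference time. The tiny network, tensor shapes, and channel counts below are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

n_labels = 17  # assumed: GTVp + 16 OARs, one-hot encoded

class TinySegNet(nn.Module):
    """Stand-in for a U-Net-style segmentation network."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, out_channels, kernel_size=1),
        )

    def forward(self, x):
        return self.net(x)

# Reference model (RM): mid-treatment image only -> 1 input channel.
# General model (GAM): mid-treatment image + pre-treatment image + one-hot pre-treatment labels.
gam = TinySegNet(in_channels=1 + 1 + n_labels, out_channels=n_labels)

mid_ct = torch.randn(1, 1, 32, 96, 96)             # mid-treatment image
pre_ct = torch.randn(1, 1, 32, 96, 96)             # pre-treatment image (assumed aligned)
pre_labels = torch.zeros(1, n_labels, 32, 96, 96)  # one-hot pre-treatment segmentation

x = torch.cat([mid_ct, pre_ct, pre_labels], dim=1)  # channel-wise concatenation
logits = gam(x)                                     # (1, n_labels, 32, 96, 96)
```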

Language: English

Citations

0
