Head and Neck Tumor Segmentation of MRI from Pre- and Mid-Radiotherapy with Pre-Training, Data Augmentation and Dual Flow UNet

Litingyu Wang, Wenjun Liao, Shichuan Zhang

et al.

Lecture notes in computer science, Journal Year: 2025, Volume and Issue: unknown, P. 75 - 86

Published: Jan. 1, 2025

Language: English

Deep learning-based outcome prediction using PET/CT and automatically predicted probability maps of primary tumor in patients with oropharyngeal cancer
Alessia de Biase, Baoqiang Ma, Jiapan Guo

et al.

Computer Methods and Programs in Biomedicine, Journal Year: 2023, Volume and Issue: 244, P. 107939 - 107939

Published: Nov. 22, 2023

Recently, deep learning (DL) algorithms have shown promise in predicting outcomes such as distant metastasis-free survival (DMFS) and overall survival (OS) from pre-treatment imaging in head and neck cancer. A segmentation of the primary Gross Tumor Volume (GTVp) is often used as an additional DL input channel to improve model performance. However, a binary GTVp mask directs the focus of the network to the delineated region only and uniformly. Models trained for segmentation have also been shown to generate predicted tumor probability maps (TPM), where each pixel value corresponds to the degree of certainty that the pixel is classified as tumor. The aim of this study was to explore the effect of using a TPM as an extra input channel in CT- and PET-based outcome prediction for oropharyngeal cancer (OPC) patients, in terms of local control (LC), regional control (RC), DMFS, and OS.
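The TPM-as-extra-channel idea above can be sketched minimally: a hypothetical helper (plain Python over small 2D slices, not the authors' pipeline) that builds a multi-channel input in which the tumor channel is either the hard binary GTVp mask or the soft probability map.

```python
def make_input(ct, pet, tumor_prob, binarize=False, thresh=0.5):
    """Build a 3-channel network input [CT, PET, tumor channel] from
    equally shaped 2D slices given as nested lists.

    With binarize=True the tumor channel is a hard 0/1 GTVp mask, which
    directs the network to the delineated region only and uniformly;
    with binarize=False the soft per-voxel probabilities (the TPM) are
    kept, so the network also sees the degree of certainty per voxel.
    """
    if binarize:
        tumor = [[1.0 if p >= thresh else 0.0 for p in row]
                 for row in tumor_prob]
    else:
        tumor = tumor_prob
    return [ct, pet, tumor]  # channel-first layout: C x H x W
```

The names and shapes here are illustrative assumptions; the study's actual preprocessing and network inputs are described in the paper itself.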

Language: English

Citations: 16

Radiomics-Enhanced Deep Multi-task Learning for Outcome Prediction in Head and Neck Cancer
Mingyuan Meng, Lei Bi, Dagan Feng

et al.

Lecture notes in computer science, Journal Year: 2023, Volume and Issue: unknown, P. 135 - 143

Published: Jan. 1, 2023

Language: English

Citations: 14

SwinCross: Cross‐modal Swin transformer for head‐and‐neck tumor segmentation in PET/CT images

Gary Y. Li, Junyu Chen, Se‐In Jang

et al.

Medical Physics, Journal Year: 2023, Volume and Issue: 51(3), P. 2096 - 2107

Published: Sept. 30, 2023

Abstract
Background: Radiotherapy (RT) combined with cetuximab is the standard treatment for patients with inoperable head and neck cancers. Segmentation of head and neck (H&N) tumors is a prerequisite for radiotherapy planning but a time‐consuming process. In recent years, deep convolutional neural networks (DCNN) have become the de facto standard for automated image segmentation. However, due to the expensive computational cost associated with enlarging the field of view in DCNNs, their ability to model long‐range dependency is still limited, and this can result in sub‐optimal segmentation performance for objects whose background context spans long distances. On the other hand, Transformer models have demonstrated excellent capabilities in capturing such long‐range information in several semantic segmentation tasks performed on medical images.
Purpose: Despite the impressive representation capacity of vision transformer models, current transformer‐based segmentation models suffer from inconsistent and incorrect dense predictions when fed multi‐modal input data. We suspect that the power of their self‐attention mechanism may be limited in extracting the complementary information that exists between modalities. To this end, we propose a novel model, here debuted, the Cross‐modal Swin transformer (SwinCross), with a cross‐modal attention (CMA) module to incorporate cross‐modal feature extraction at multiple resolutions.
Methods: The architecture is a 3D model with two main components: (1) a cross‐modal attention module integrating the two modalities (PET and CT), and (2) a shifted window attention block for learning complementary features across the modalities. To evaluate the efficacy of the approach, we conducted experiments and ablation studies on the HECKTOR 2021 challenge dataset. We compared the method against nnU‐Net (the backbone of the top‐5 methods in HECKTOR 2021) and state‐of‐the‐art transformer‐based models, including UNETR and Swin UNETR. The experiments employed a five‐fold cross‐validation setup using PET and CT images.
Results: Empirical evidence demonstrates that the proposed method consistently outperforms the comparative techniques. This success is attributed to the CMA module's ability to enhance inter‐modality feature representations between PET and CT during head‐and‐neck tumor segmentation. Notably, SwinCross surpasses Swin UNETR across all five folds, showcasing its proficiency in learning multi‐modal representations at varying resolutions through its cross‐modal attention modules.
Conclusions: We introduced a transformer‐based model for automating the delineation of head‐and‐neck tumors. Our model incorporates a cross‐modality attention module, enabling the exchange of complementary features between modalities. The experimental results establish the superiority of the method in capturing improved inter‐modality correlations. Furthermore, the methodology holds applicability to tasks involving different imaging modalities like SPECT/CT or PET/MRI. Code: https://github.com/yli192/SwinCross_CrossModalSwinTransformer_for_Medical_Image_Segmentation
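The cross-modal attention mechanism underlying this family of models can be sketched as scaled dot-product attention in which queries come from one modality (e.g. PET tokens) and keys/values from the other (e.g. CT tokens). The plain-Python sketch below omits the learned projection matrices and windowing, so it illustrates the mechanism rather than the SwinCross CMA module itself.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_modal_attention(q_feats, kv_feats):
    """Scaled dot-product attention across modalities.

    q_feats:  N query vectors from modality A (e.g. PET tokens).
    kv_feats: M vectors from modality B (e.g. CT tokens), used here as
              both keys and values (a simplification: no W_q/W_k/W_v).
    Returns N attended vectors: each modality-A token becomes a
    weighted mix of modality-B tokens.
    """
    d = len(q_feats[0])
    out = []
    for q in q_feats:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in kv_feats]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, kv_feats))
                    for j in range(d)])
    return out
```

Because the output is a convex combination of the key/value tokens, a PET token most strongly absorbs the CT tokens it is most similar to, which is the intuition behind exchanging complementary features between modalities.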

Language: English

Citations: 14

Brain tumour detection using machine and deep learning: a systematic review
Novsheena Rasool, Javaid Iqbal Bhat

Multimedia Tools and Applications, Journal Year: 2024, Volume and Issue: unknown

Published: May 23, 2024

Language: English

Citations: 6

MMCA-NET: A Multimodal Cross Attention Transformer Network for Nasopharyngeal Carcinoma Tumor Segmentation Based on a Total-Body PET/CT System
Wenjie Zhao, Zhenxing Huang, Si Tang

et al.

IEEE Journal of Biomedical and Health Informatics, Journal Year: 2024, Volume and Issue: 28(9), P. 5447 - 5458

Published: May 28, 2024

Nasopharyngeal carcinoma (NPC) is a malignant tumor primarily treated by radiotherapy. Accurate delineation of the target is essential for improving treatment effectiveness. However, the segmentation performance of current models is unsatisfactory due to poor boundaries, large-scale volume variation, and the labor-intensive nature of manual delineation. In this paper, MMCA-Net, a novel segmentation network for NPC using PET/CT images, is introduced; it incorporates an innovative multimodal cross attention transformer (MCA-Transformer) and a modified U-Net architecture to enhance modal fusion by leveraging cross-attention mechanisms between CT and PET data. Our method, tested against ten algorithms via fivefold cross-validation on samples from Sun Yat-sen University Cancer Center and the public HECKTOR dataset, consistently topped all four evaluation metrics, with average Dice similarity coefficients of 0.815 and 0.7944, respectively. Furthermore, ablation experiments were conducted to demonstrate the superiority of our method over multiple baseline and variant techniques. The proposed method has promising potential for application in other segmentation tasks.
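The Dice similarity coefficient reported above (0.815 and 0.7944) is the standard overlap metric for segmentation, DSC = 2|P ∩ T| / (|P| + |T|). A minimal pure-Python version over flattened binary masks (a sketch of the metric's definition, not the paper's evaluation code):

```python
def dice_coefficient(pred, target):
    """Dice similarity coefficient between two binary masks.

    pred and target are flat lists of 0/1 labels of equal length.
    Returns 1.0 for two empty masks by convention.
    """
    assert len(pred) == len(target)
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 if total == 0 else 2.0 * intersection / total
```

For example, masks that agree on half of their foreground voxels, such as [1, 1, 0, 0] versus [1, 0, 1, 0], score 0.5.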

Language: English

Citations: 5

Application of simultaneous uncertainty quantification and segmentation for oropharyngeal cancer use-case with Bayesian deep learning
Jaakko Sahlsten, Joel Jaskari, Kareem A. Wahid

et al.

Communications Medicine, Journal Year: 2024, Volume and Issue: 4(1)

Published: June 8, 2024

Abstract
Background: Radiotherapy is a core treatment modality for oropharyngeal cancer (OPC), where the primary gross tumor volume (GTVp) is manually segmented with high interobserver variability. This calls for reliable and trustworthy automated tools in the clinician workflow. Therefore, accurate uncertainty quantification and its downstream utilization are critical.
Methods: Here we propose uncertainty-aware deep learning for OPC GTVp segmentation and illustrate its utility in multiple applications. We examine two Bayesian deep learning (BDL) models and eight uncertainty measures, and utilize a large multi-institute dataset of 292 PET/CT scans to systematically analyze our approach.
Results: We show that our uncertainty-based approach accurately predicts the quality of the segmentation in 86.6% of cases, identifies low-performance cases for semi-automated correction, and visualizes regions where the segmentations are likely to fail.
Conclusions: Our BDL-based analysis provides a first step towards more widespread implementation of uncertainty quantification in OPC GTVp segmentation.
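One common uncertainty measure in Bayesian segmentation is the predictive entropy of the mean foreground probability over multiple stochastic forward passes (e.g. MC dropout). The per-voxel sketch below is illustrative of that general idea, not necessarily one of the paper's eight measures:

```python
import math

def predictive_entropy(mc_probs):
    """Entropy of the mean foreground probability for one voxel.

    mc_probs: foreground probabilities for the same voxel from T
    stochastic forward passes of a Bayesian model. High entropy flags
    voxels where the model is uncertain and the segmentation is more
    likely to fail; low entropy marks confident voxels.
    """
    p = sum(mc_probs) / len(mc_probs)
    if p in (0.0, 1.0):
        return 0.0  # fully confident: zero entropy
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))
```

Aggregating such per-voxel values over a scan (e.g. averaging near the predicted boundary) yields a scan-level score that can rank cases for semi-automated review, matching the triage use described above.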

Language: English

Citations: 5

Radiomics prognostic analysis of PET/CT images in a multicenter head and neck cancer cohort: investigating ComBat strategies, sub-volume characterization, and automatic segmentation
Hui Xu, Nassib Abdallah, Jean-Marie Marion

et al.

European Journal of Nuclear Medicine and Molecular Imaging, Journal Year: 2023, Volume and Issue: 50(6), P. 1720 - 1734

Published: Jan. 24, 2023

Language: English

Citations: 11

Head and Neck Primary Tumor and Lymph Node Auto-segmentation for PET/CT Scans

Arnav Jain, Julia Huang, Yashwanth Ravipati

et al.

Lecture notes in computer science, Journal Year: 2023, Volume and Issue: unknown, P. 61 - 69

Published: Jan. 1, 2023

Language: English

Citations: 11

Recurrence-Free Survival Prediction Under the Guidance of Automatic Gross Tumor Volume Segmentation for Head and Neck Cancers
Kai Wang, Yunxiang Li, Michael Dohopolski

et al.

Lecture notes in computer science, Journal Year: 2023, Volume and Issue: unknown, P. 144 - 153

Published: Jan. 1, 2023

Language: English

Citations: 10

DMCT-Net: dual modules convolution transformer network for head and neck tumor segmentation in PET/CT
Jiao Wang, Yanjun Peng, Yanfei Guo

et al.

Physics in Medicine and Biology, Journal Year: 2023, Volume and Issue: 68(11), P. 115006 - 115006

Published: May 4, 2023

Abstract
Objective: Accurate segmentation of head and neck (H&N) tumors is critical in radiotherapy. However, existing methods lack effective strategies to integrate local and global information, strong semantic information with context, and spatial with channel features, all of which are important clues for improving the accuracy of tumor segmentation. In this paper, we propose a novel segmentation method called the dual modules convolution transformer network (DMCT-Net) for H&N tumors in fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT) images.
Approach: The DMCT-Net consists of a convolution transformer block (CTB), a squeeze-and-excitation (SE) pool module, and a multi-attention fusion (MAF) module. First, the CTB is designed to capture remote dependency and a multi-scale receptive field by using standard convolution, dilated convolution, and transformer operations. Second, to extract feature information from different angles, we construct an SE pool module that not only extracts multi-scale features but also uses normalization to adaptively fuse features and adjust their distribution. Third, the MAF module is proposed to combine voxel-wise local and global information. Besides, we adopt up-sampling auxiliary paths to supplement semantic information.
Main results: The experimental results show that DMCT-Net has better or more competitive performance than several advanced methods on three datasets. The best metric scores are as follows: DSC of 0.781, HD95 of 3.044, precision of 0.798, and sensitivity of 0.857. Comparative experiments based on bimodal and single-modal input indicate that bimodal input provides sufficient information for improving segmentation performance. Ablation experiments verify the effectiveness and significance of each module.
Significance: We propose a new network for 3D H&N tumor segmentation in FDG-PET/CT images, which achieves high segmentation accuracy.
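The multi-scale receptive field that the CTB obtains from dilated convolution can be quantified with the standard effective-kernel formula k_eff = k + (k - 1)(d - 1): dilation enlarges a kernel's spatial extent without adding parameters. A small sketch of the arithmetic (illustrative, not the authors' code):

```python
def effective_kernel(k, dilation):
    """Effective spatial extent of a k-tap convolution with the given
    dilation rate: k_eff = k + (k - 1) * (dilation - 1)."""
    return k + (k - 1) * (dilation - 1)

def receptive_field(layers):
    """Receptive field of a stack of stride-1 convolution layers,
    given as (kernel_size, dilation) pairs. Each layer adds
    (k_eff - 1) to the field seen by one output element."""
    rf = 1
    for k, d in layers:
        rf += effective_kernel(k, d) - 1
    return rf
```

For instance, three stacked 3-tap layers with dilations 1, 2, and 4 have effective kernels 3, 5, and 9, giving a receptive field of 15 elements, versus 7 for three ordinary 3-tap layers.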

Language: English

Citations: 9