
Lecture notes in computer science, Journal Year: 2025, Volume and Issue: unknown, P. 75 - 86
Published: Jan. 1, 2025
Language: English
Computer Methods and Programs in Biomedicine, Journal Year: 2023, Volume and Issue: 244, P. 107939 - 107939
Published: Nov. 22, 2023
Recently, deep learning (DL) algorithms have shown promise in predicting outcomes such as distant metastasis-free survival (DMFS) and overall survival (OS) from pre-treatment imaging of head and neck cancer. A segmentation of the Gross Tumor Volume of the primary tumor (GTVp) is often used as an additional input channel to improve DL model performance. However, a binary GTVp mask directs the focus of the network to the defined region only, and does so uniformly. Models trained for segmentation can also generate tumor probability maps (TPM), in which each pixel value corresponds to the degree of certainty that the pixel is classified as tumor. The aim of this study was to explore the effect of using a TPM as an extra input channel in CT- and PET-based outcome prediction for oropharyngeal cancer (OPC) patients, in terms of local control (LC), regional control (RC), DMFS, and OS.
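The difference between a hard GTVp mask channel and a soft TPM channel can be sketched with plain arrays. This is a minimal illustration, assuming hypothetical normalized CT/PET volumes and a soft probability map; all array names, shapes, and the 0.5 threshold are illustrative, not taken from the paper:

```python
import numpy as np

# Hypothetical 3D volumes (depth, height, width); values are illustrative.
ct = np.random.rand(32, 64, 64).astype(np.float32)   # normalized CT intensities
pet = np.random.rand(32, 64, 64).astype(np.float32)  # normalized PET uptake
tpm = np.random.rand(32, 64, 64).astype(np.float32)  # tumor probability map in [0, 1)

# A binary GTVp mask thresholds the probabilities, discarding the gradation:
gtvp_mask = (tpm > 0.5).astype(np.float32)

# Stack modalities as input channels: (channels, depth, height, width).
# Using the soft TPM instead of the hard mask preserves per-voxel certainty.
x_mask = np.stack([ct, pet, gtvp_mask], axis=0)
x_tpm = np.stack([ct, pet, tpm], axis=0)

print(x_tpm.shape)  # (3, 32, 64, 64)
```

The network architecture is unchanged in either case; only the third channel's content differs.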
Language: English
Citations: 16
Lecture notes in computer science, Journal Year: 2023, Volume and Issue: unknown, P. 135 - 143
Published: Jan. 1, 2023
Language: English
Citations: 14
Medical Physics, Journal Year: 2023, Volume and Issue: 51(3), P. 2096 - 2107
Published: Sept. 30, 2023
Abstract Background: Radiotherapy (RT) combined with cetuximab is the standard treatment for patients with inoperable head and neck cancers. Segmentation of head and neck (H&N) tumors is a prerequisite for radiotherapy planning but a time-consuming process. In recent years, deep convolutional neural networks (DCNN) have become the de facto standard for automated image segmentation. However, due to the expensive computational cost associated with enlarging the field of view in DCNNs, their ability to model long-range dependency is still limited, and this can result in sub-optimal segmentation performance for objects whose background context spans long distances. On the other hand, Transformer models have demonstrated excellent capabilities in capturing such long-range information in several semantic segmentation tasks performed on medical images. Purpose: Despite the impressive representation capacity of vision transformer models, current transformer-based models suffer from inconsistent and incorrect dense predictions when fed multi-modal input data. We suspect that the power of their self-attention mechanism may be limited in extracting the complementary information that exists between modalities. To this end, we propose a novel segmentation model, the Cross-modal Swin Transformer (SwinCross), with a cross-modal attention (CMA) module to incorporate cross-modal feature extraction at multiple resolutions. Methods: The proposed architecture is a 3D segmentation network with two main components: (1) a cross-modal attention module integrating the two modalities (PET and CT), and (2) a shifted-window transformer block for learning feature representations within and across modalities. To evaluate the efficacy of our approach, we conducted experiments and ablation studies on the HECKTOR 2021 challenge dataset. We compared our method against nnU-Net (the backbone of the top-5 methods in HECKTOR 2021) and state-of-the-art transformer-based models, including UNETR and Swin UNETR. The experiments employed a five-fold cross-validation setup using PET and CT images. Results: Empirical evidence demonstrates that the proposed method consistently outperforms the comparative techniques. This success is attributed to the CMA module's ability to enhance inter-modality feature representations between PET and CT during head-and-neck tumor segmentation. Notably, SwinCross surpasses Swin UNETR across all five folds, showcasing its proficiency in learning multi-modal feature representations at varying resolutions through its cross-modal attention modules.
Conclusions: We introduced a cross-modal Swin Transformer for automating the delineation of head and neck tumors in PET and CT images. Our model incorporates a cross-modality attention module, enabling the exchange of features between modalities. The experimental results establish the superiority of the proposed method in capturing improved inter-modality correlations. Furthermore, the methodology holds applicability to tasks involving different imaging modalities like SPECT/CT or PET/MRI. Code: https://github.com/yli192/SwinCross_CrossModalSwinTransformer_for_Medical_Image_Segmentation
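The core cross-modal attention idea, queries from one modality attending to keys and values from the other, can be sketched with plain NumPy. This is a generic scaled dot-product attention illustration, not the SwinCross implementation; all weight matrices, token counts, and dimensions are hypothetical:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(pet_tokens, ct_tokens, wq, wk, wv):
    """Scaled dot-product attention where queries come from one modality
    (PET) and keys/values from the other (CT), so each PET token can
    gather complementary CT context."""
    q = pet_tokens @ wq                       # (n_pet, d)
    k = ct_tokens @ wk                        # (n_ct, d)
    v = ct_tokens @ wv                        # (n_ct, d)
    scores = q @ k.T / np.sqrt(q.shape[-1])   # (n_pet, n_ct)
    return softmax(scores, axis=-1) @ v       # (n_pet, d)

rng = np.random.default_rng(0)
d = 16
pet_tokens = rng.standard_normal((10, d))     # 10 hypothetical PET patch tokens
ct_tokens = rng.standard_normal((12, d))      # 12 hypothetical CT patch tokens
wq, wk, wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
fused = cross_modal_attention(pet_tokens, ct_tokens, wq, wk, wv)
print(fused.shape)  # (10, 16)
```

In the paper's setting this exchange happens inside shifted-window blocks at multiple resolutions; the sketch above shows only the single-head attention primitive.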
Language: English
Citations: 14
Multimedia Tools and Applications, Journal Year: 2024, Volume and Issue: unknown
Published: May 23, 2024
Language: English
Citations: 6
IEEE Journal of Biomedical and Health Informatics, Journal Year: 2024, Volume and Issue: 28(9), P. 5447 - 5458
Published: May 28, 2024
Nasopharyngeal carcinoma (NPC) is a malignant tumor primarily treated by radiotherapy. Accurate delineation of the target is essential for improving treatment effectiveness. However, the segmentation performance of current models is unsatisfactory due to poor boundaries, large-scale volume variation, and the labor-intensive nature of manual delineation. In this paper, MMCA-Net, a novel segmentation network for NPC using PET/CT images, is introduced; it incorporates an innovative multimodal cross attention transformer (MCA-Transformer) and a modified U-Net architecture to enhance modal fusion by leveraging cross-attention mechanisms between CT and PET data. Our method, tested against ten algorithms via fivefold cross-validation on samples from Sun Yat-sen University Cancer Center and the public HECKTOR dataset, consistently topped all four evaluation metrics, with average Dice similarity coefficients of 0.815 and 0.7944, respectively. Furthermore, ablation experiments were conducted to demonstrate the superiority of our method over multiple baseline and variant techniques. The proposed method has promising potential for application in other segmentation tasks.
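The Dice similarity coefficient quoted above measures voxel overlap between a predicted and a reference mask, DSC = 2|A ∩ B| / (|A| + |B|). A minimal NumPy version of the metric; the toy masks below are illustrative, not data from the paper:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy 2D masks: the prediction covers 3 of the 4 reference voxels.
target = np.zeros((4, 4), dtype=np.uint8)
target[1:3, 1:3] = 1          # 4 reference voxels
pred = np.zeros((4, 4), dtype=np.uint8)
pred[1:3, 1:2] = 1
pred[1, 2] = 1                # 3 predicted voxels, all inside the reference

print(dice_coefficient(pred, target))  # 2*3 / (3 + 4) = 6/7
```

A DSC of 1.0 means perfect overlap; the reported 0.815 and 0.7944 are averages of this quantity over the two test cohorts.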
Language: English
Citations: 5
Communications Medicine, Journal Year: 2024, Volume and Issue: 4(1)
Published: June 8, 2024
Abstract Background: Radiotherapy is a core treatment modality for oropharyngeal cancer (OPC), where the primary gross tumor volume (GTVp) is manually segmented with high interobserver variability. This calls for reliable and trustworthy automated tools in the clinician workflow. Therefore, accurate uncertainty quantification and its downstream utilization are critical. Methods: Here we propose uncertainty-aware deep learning for OPC GTVp segmentation and illustrate its utility in multiple applications. We examine two Bayesian deep learning (BDL) models and eight uncertainty measures, and utilize a large multi-institute dataset of 292 PET/CT scans to systematically analyze our approach. Results: We show that our uncertainty-based approach accurately predicts the quality of the segmentation in 86.6% of cases, identifies low-performance cases for semi-automated correction, and visualizes regions where the segmentations are likely to fail. Conclusions: Our BDL-based analysis provides a first step towards more widespread implementation of uncertainty-aware segmentation.
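One common way to obtain the kind of voxel-wise uncertainty described above is to aggregate several stochastic forward passes (e.g. MC-dropout samples) into a predictive-entropy map. The sketch below is a generic illustration of that idea, not the paper's specific BDL models or its eight measures; the sample values are invented:

```python
import numpy as np

def predictive_entropy(prob_samples, eps=1e-12):
    """Voxel-wise binary predictive entropy of the mean foreground
    probability over T stochastic forward passes.
    prob_samples: (T, ...) array of foreground probabilities in [0, 1]."""
    p = prob_samples.mean(axis=0)
    return -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))

# Two voxels, three hypothetical sampled probability maps:
# the samples agree on voxel 0 but disagree on voxel 1.
samples = np.array([
    [0.95, 0.2],
    [0.97, 0.8],
    [0.96, 0.4],
])
h = predictive_entropy(samples)
print(h)  # entropy is low for the confident voxel, higher for the ambiguous one
```

Summarizing such a map over the predicted volume (e.g. its mean) yields a scalar that can be thresholded to flag scans for semi-automated correction, which is the kind of downstream use the abstract reports.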
Language: English
Citations: 5
European Journal of Nuclear Medicine and Molecular Imaging, Journal Year: 2023, Volume and Issue: 50(6), P. 1720 - 1734
Published: Jan. 24, 2023
Language: English
Citations: 11
Lecture notes in computer science, Journal Year: 2023, Volume and Issue: unknown, P. 61 - 69
Published: Jan. 1, 2023
Language: English
Citations: 11
Lecture notes in computer science, Journal Year: 2023, Volume and Issue: unknown, P. 144 - 153
Published: Jan. 1, 2023
Language: English
Citations: 10
Physics in Medicine and Biology, Journal Year: 2023, Volume and Issue: 68(11), P. 115006 - 115006
Published: May 4, 2023
Abstract Objective: Accurate segmentation of head and neck (H&N) tumors is critical in radiotherapy. However, existing methods lack effective strategies to integrate local and global information, strong semantic information with context, and spatial with channel features, all of which are important clues for improving the accuracy of tumor segmentation. In this paper, we propose a novel method called the dual modules convolution transformer network (DMCT-Net) for H&N tumor segmentation in fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT) images. Approach: The DMCT-Net consists of a convolution transformer block (CTB), a squeeze and excitation (SE) pool module, and a multi-attention fusion (MAF) module. First, the CTB is designed to capture remote dependency and a multi-scale receptive field by using standard convolution, dilated convolution, and transformer operation. Second, to extract feature information from different angles, we construct the SE pool module, which not only extracts multi-scale features simultaneously but also uses normalization to adaptively fuse features and adjust the feature distribution. Third, the MAF module is proposed to combine global context with voxel-wise local information. Besides, we adopt up-sampling auxiliary paths to supplement multi-scale semantic information. Main results: The experimental results show that DMCT-Net has better or more competitive performance than several advanced methods on three datasets. The best metric scores are as follows: DSC of 0.781, HD95 of 3.044, precision of 0.798, and sensitivity of 0.857. Comparative experiments based on bimodal and single-modal input indicate that bimodal input provides more sufficient information for improving segmentation performance. Ablation experiments verify the effectiveness and significance of each module. Significance: We propose a new network for 3D H&N tumor segmentation in FDG-PET/CT images, which achieves high segmentation accuracy.
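The squeeze-and-excitation step named above, in its generic form, pools each channel to a scalar ("squeeze"), passes the result through a small bottleneck to produce a per-channel gate ("excitation"), and rescales the channels. A minimal NumPy sketch of that generic mechanism; the weights and sizes are hypothetical, and this is not the paper's SE pool module:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excite(features, w1, w2):
    """Squeeze-and-excitation gating on a (C, D, H, W) feature map:
    global average pooling per channel, a two-layer bottleneck with
    ReLU, then channel-wise rescaling by the sigmoid gate."""
    c = features.shape[0]
    squeezed = features.reshape(c, -1).mean(axis=1)       # (C,) channel descriptors
    gate = sigmoid(w2 @ np.maximum(w1 @ squeezed, 0.0))   # (C,) values in (0, 1)
    return features * gate.reshape(c, 1, 1, 1)

rng = np.random.default_rng(2)
c, r = 8, 2                                   # channels and reduction ratio
feats = rng.standard_normal((c, 4, 8, 8))     # hypothetical 3D feature map
w1 = rng.standard_normal((c // r, c)) * 0.1   # squeeze: C -> C/r
w2 = rng.standard_normal((c, c // r)) * 0.1   # excite: C/r -> C
out = squeeze_excite(feats, w1, w2)
print(out.shape)  # (8, 4, 8, 8)
```

Because the gate lies in (0, 1), the module can only attenuate channels, letting the network emphasize informative feature channels relative to the rest.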
Language: English
Citations: 9