Automatic Delineation and Prognostic Assessment of Head and Neck Tumor Lesion in Multi-Modality Positron Emission Tomography / Computed Tomography Images Based on Deep Learning: A Survey DOI
Zain Ul Abidin, Rizwan Ali Naqvi, Muhammad Zubair Islam

et al.

Neurocomputing, Journal year: 2024, Issue: 610, pp. 128531 - 128531

Published: Sep. 10, 2024

Language: English

Noninvasive Molecular Subtyping of Pediatric Low-Grade Glioma with Self-Supervised Transfer Learning DOI
Divyanshu Tak, Zezhong Ye,

Anna Zapaishchykova

et al.

Radiology Artificial Intelligence, Journal year: 2024, Issue: 6(3)

Published: March 6, 2024

Purpose To develop and externally test a scan-to-prediction deep learning pipeline for noninvasive, MRI-based BRAF mutational status classification in pediatric low-grade glioma (pLGG).

Language: English

Cited by

18

Overview of the HECKTOR Challenge at MICCAI 2022: Automatic Head and Neck Tumor Segmentation and Outcome Prediction in PET/CT DOI
Vincent Andrearczyk, Valentin Oreiller,

Moamen Abobakr

et al.

Lecture Notes in Computer Science, Journal year: 2023, Issue: unknown, pp. 1 - 30

Published: Jan. 1, 2023

Language: English

Cited by

42

Development and Validation of an Automated Image-Based Deep Learning Platform for Sarcopenia Assessment in Head and Neck Cancer DOI Creative Commons
Zezhong Ye, Anurag Saraf, Yashwanth Ravipati

et al.

JAMA Network Open, Journal year: 2023, Issue: 6(8), pp. e2328280 - e2328280

Published: Aug. 10, 2023

Sarcopenia is an established prognostic factor in patients with head and neck squamous cell carcinoma (HNSCC); the quantification of sarcopenia assessed by imaging is typically achieved through the skeletal muscle index (SMI), which can be derived from cervical segmentation and cross-sectional area. However, manual segmentation is labor intensive, prone to interobserver variability, and impractical for large-scale clinical use. The objective was to develop and externally validate a fully automated image-based deep learning platform for vertebral SMI calculation and to evaluate associations with survival and treatment toxicity outcomes. For this study, the model development data set was curated from publicly available, deidentified data of patients with HNSCC treated at MD Anderson Cancer Center between January 1, 2003, and December 31, 2013. A total of 899 patients undergoing primary radiation with abdominal computed tomography scans and complete clinical information were selected. An external validation data set was retrospectively collected from patients undergoing primary radiation therapy between 1996 and 2013 at Brigham and Women's Hospital. The analysis was performed between May 2022 and March 2023. The exposure was C3 vertebral segmentation during radiation therapy for HNSCC; the main outcomes were overall survival and treatment toxicity outcomes of HNSCC. The patient cohort comprised 899 patients (median [range] age, 58 [24-90] years; 140 female [15.6%] and 755 male [84.0%]). Dice similarity coefficients for the validation (n = 96) and internal test (n = 48) sets were 0.90 (95% CI, 0.90-0.91 and 0.89-0.91, respectively), with a mean 96.2% acceptable rate between 2 reviewers on external clinical testing (n = 377). Estimated cross-sectional area values were associated with manually annotated values (Pearson r = 0.99; P < .001) across data sets. On multivariable Cox proportional hazards regression, SMI-derived sarcopenia was associated with worse overall survival (hazard ratio, 2.05; 95% CI, 1.04-4.04; P = .04) and longer feeding tube duration (median [range], 162 [6-1477] vs 134 [15-1255] days; hazard ratio, 0.66; 95% CI, 0.48-0.89; P = .006) than no sarcopenia. This study's findings show that a fully automated deep learning pipeline can accurately measure sarcopenia and that sarcopenia is associated with important disease outcomes; the pipeline could enable integration of sarcopenia assessment into clinical decision making for individuals with HNSCC.
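Several entries in this list report segmentation agreement as a Dice similarity coefficient. As a quick reference, a minimal sketch of how Dice is computed over binary masks (illustrative only, not the published pipeline; the `dice_coefficient` name and the flat-list mask representation are assumptions):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks.

    Masks are flat sequences of 0/1 voxel labels of equal length;
    3D volumes would be flattened before the comparison.
    """
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Perfect overlap yields 1.0; disjoint masks yield 0.0.
print(dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0]))  # 2*1/(2+1) ≈ 0.667
```

Dice rewards overlap relative to the combined mask sizes, which is why it is preferred over plain voxel accuracy for small structures such as tumors.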

Language: English

Cited by

22

Stepwise Transfer Learning for Expert-level Pediatric Brain Tumor MRI Segmentation in a Limited Data Scenario DOI
Aidan Boyd, Zezhong Ye, Sanjay P. Prabhu

et al.

Radiology Artificial Intelligence, Journal year: 2024, Issue: 6(4)

Published: July 1, 2024

Purpose To develop, externally test, and evaluate clinical acceptability of a deep learning pediatric brain tumor segmentation model using stepwise transfer learning. Materials and Methods In this retrospective study, the authors leveraged two T2-weighted MRI datasets (May 2001 through December 2015) from a national consortium (n = 184; median age, 7 years [range, 1-23 years]; 94 male patients) and a cancer center (n = 100; median age, 8 years [range, 1-19 years]; 47 male patients) to develop deep learning neural networks for low-grade glioma segmentation, using a stepwise transfer learning approach to maximize performance in a limited data scenario. The best model was tested on an independent test set and subjected to randomized blinded evaluation by three clinicians, wherein they assessed expert- and artificial intelligence (AI)-generated segmentations via 10-point Likert scales and Turing tests. Results The best AI model used in-domain stepwise transfer learning (median Dice score coefficient, 0.88 [IQR, 0.72-0.91] vs 0.812 [IQR, 0.56-0.89] for the baseline model; P = .049). With external testing, the AI model yielded excellent accuracy against reference standards from experts (median Dice similarity coefficients: expert 1, 0.83 [IQR, 0.75-0.90]; expert 2, 0.81 [IQR, 0.70-0.89]; expert 3, [IQR, 0.68-0.88]; mean accuracy, 0.82). For clinical benchmarking (n = 100 scans), experts rated AI-based segmentations higher on average compared with those of other experts (median Likert score, 9 [IQR, 7-9] vs 7 [IQR, 7-9]) and rated more of them as clinically acceptable (80.2% vs 65.4%). Experts correctly predicted the origin of AI segmentations in 26.0% of cases. Conclusion Stepwise transfer learning enabled expert-level automated pediatric brain tumor autosegmentation and volumetric measurement with a high level of clinical acceptability. Keywords: Transfer Learning, Pediatric Brain Tumors, Segmentation, Deep Learning. Supplemental material is available for this article. © RSNA, 2024.
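The Dice results above are reported as a median with interquartile range (IQR). A small standard-library helper for producing that summary from per-scan scores (a generic sketch; the `summarize_dice` name is not from the paper):

```python
from statistics import median, quantiles

def summarize_dice(scores):
    """Summarize per-scan Dice scores as (median, (Q1, Q3)).

    Uses the 'inclusive' quartile method, which treats the scores as the
    whole population rather than a sample drawn from one.
    """
    q1, _, q3 = quantiles(scores, n=4, method="inclusive")
    return median(scores), (q1, q3)

med, iqr = summarize_dice([0.5, 0.6, 0.7, 0.8, 0.9])
print(f"median Dice {med:.2f} [IQR, {iqr[0]:.2f}-{iqr[1]:.2f}]")
```

Median/IQR is the conventional summary here because per-scan Dice distributions are skewed, so a mean and standard deviation would be dominated by a few poorly segmented scans.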

Language: English

Cited by

6

Expert-level pediatric brain tumor segmentation in a limited data scenario with stepwise transfer learning DOI Creative Commons
Aidan Boyd, Zezhong Ye, Sanjay P. Prabhu

et al.

medRxiv (Cold Spring Harbor Laboratory), Journal year: 2023, Issue: unknown

Published: June 30, 2023

ABSTRACT Purpose Artificial intelligence (AI)-automated tumor delineation for pediatric gliomas would enable real-time volumetric evaluation to support diagnosis, treatment response assessment, and clinical decision-making. Auto-segmentation algorithms for pediatric tumors are rare due to limited data availability and have yet to demonstrate clinical translation. Methods We leveraged two datasets from a national brain tumor consortium (n=184) and a pediatric cancer center (n=100) to develop, externally validate, and clinically benchmark deep learning neural networks for pediatric low-grade glioma (pLGG) segmentation using a novel in-domain, stepwise transfer learning approach. The best model [via Dice similarity coefficient (DSC)] was externally validated and subject to randomized, blinded evaluation by three expert clinicians, wherein they assessed clinical acceptability of expert- and AI-generated segmentations via 10-point Likert scales and Turing tests. Results The best AI model utilized in-domain, stepwise transfer learning (median DSC: 0.877 [IQR 0.715-0.914]) versus the baseline model (median DSC: 0.812 [IQR 0.559-0.888]; p<0.05). On external testing (n=60), the AI model yielded accuracy comparable to inter-expert agreement (median DSC: 0.834 [IQR 0.726-0.901] vs. 0.861 [IQR 0.795-0.905]; p=0.13). On clinical benchmarking (n=100 scans, 300 segmentations from 3 experts), experts rated AI segmentations higher on average compared with those of other experts (median rating: 9 [IQR 7-9] vs 7 [IQR 7-9]; p<0.05 for each) and rated significantly more (p<0.05) AI segmentations as clinically acceptable overall (80.2% vs 65.4%). Experts correctly predicted the origins of AI segmentations in 26.0% of cases. Conclusions Stepwise transfer learning enabled expert-level, automated pediatric brain tumor auto-segmentation and volumetric measurement with a high level of clinical acceptability. This approach may enable development and translation of AI imaging algorithms in limited data scenarios. Summary The authors proposed a stepwise transfer learning approach to develop and validate a pediatric brain tumor segmentation model whose performance was on par with that of neuroradiologists and radiation oncologists. Key Points There are limited data available to train segmentation models for pediatric brain tumors, and adult-centric models generalize poorly to the pediatric setting. The authors demonstrated segmentation performance gains (by Dice score) using stepwise transfer learning methodologies with human expert validation. On clinical testing, Transfer-Encoder model segmentations received higher acceptability ratings than expert segmentations (clinically acceptable: 80.2% vs 65.4%), and Turing tests showed uniformly low ability of experts to identify AI segmentations as human-generated (mean accuracy: 26%).

Language: English

Cited by

12

Weighted Fusion Transformer for Dual PET/CT Head and Neck Tumor Segmentation DOI Creative Commons
Mohammed A. Mahdi,

Shahanawaj Ahamad,

Sawsan Ali Saad

et al.

IEEE Access, Journal year: 2024, Issue: 12, pp. 110905 - 110919

Published: Jan. 1, 2024

Accurate tumor segmentation in PET/CT imaging is essential for the diagnosis and treatment of cancer, impacting therapeutic outcomes and patient management. Our study introduces a new approach integrating a Weighted Fusion Transformer Network to enhance the segmentation of tumor volumes. This method synergizes PET and CT modalities through a FormerU-Net architecture that employs convolutional neural networks alongside transformer blocks, aiming to leverage the unique advantages of each modality. We evaluated the proposed approach using a multi-institutional dataset, applying key performance metrics such as the aggregate Dice Similarity Coefficient, Jaccard Index, Volume Correlation, and Average Surface Distance to assess segmentation precision. The results indicate that the CT/PET/Fusion strategy significantly improves tumor delineation, outperforming traditional methods. The main findings suggest that this integrative approach could potentially redefine standard clinical practice. Lastly, it offers a promising direction for enhancing accuracy in oncological imaging, with implications for the improvement of patient-specific treatment strategies.
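The abstract does not spell out the fusion mechanism, but a common way to weight two modality streams is to blend their feature maps with softmax-normalized scalars. A minimal, framework-free sketch under that assumption (the `weighted_fusion` name and per-voxel list representation are illustrative, not the paper's FormerU-Net code):

```python
import math

def weighted_fusion(pet_feat, ct_feat, w_pet=0.0, w_ct=0.0):
    """Blend per-voxel PET and CT features with softmax-normalized weights.

    w_pet and w_ct play the role of learnable scalars; the softmax keeps
    the two modality weights positive and summing to 1.
    """
    z = math.exp(w_pet) + math.exp(w_ct)
    a_pet, a_ct = math.exp(w_pet) / z, math.exp(w_ct) / z
    return [a_pet * p + a_ct * c for p, c in zip(pet_feat, ct_feat)]

# Equal weights reduce to a plain average of the two modalities.
print(weighted_fusion([1.0, 2.0], [3.0, 4.0]))  # [2.0, 3.0]
```

In a trained network the weights would be learned, letting the model lean on PET where metabolic contrast dominates and on CT where anatomy dominates.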

Language: English

Cited by

3

Automated imaging-based tumor burden and pre-treatment circulating tumor DNA in HPV-associated oropharynx cancer DOI
Mina Bakhtiar, Zezhong Ye, Jonathan D. Schoenfeld

et al.

medRxiv (Cold Spring Harbor Laboratory), Journal year: 2025, Issue: unknown

Published: Jan. 16, 2025

ABSTRACT Background Artificial intelligence (AI)-based imaging analysis has applications for the diagnosis of head and neck malignancies, and serum circulating tumor-associated DNA (ctDNA) is an emerging biomarker being evaluated for response assessment and risk stratification in human papilloma virus (HPV)-associated oropharynx squamous cell carcinoma (HPV-OPSCC). The relationship between automated imaging biomarkers and ctDNA has not yet been explored. Objective To test the association between AI-derived measures of tumor burden and ctDNA among patients with HPV-OPSCC. Design, Setting, and Participants This cross-sectional study included patients who were treated with curative intent for HPV-OPSCC between 2020-2023 and prospectively enrolled on a blood collection protocol (ClinicalTrials.gov identifier: NCT04965792). Exposures Clinical factors including demographics, AJCC 8th edition clinical staging, and HPV genotype. Main Outcomes and Measures Pre-treatment measurement of tumor-tissue modified viral (TTMV) HPV-DNA using a commercially available test, measured as a continuous value (fragments/mL). Primary and nodal tumor volumes, total tumor volume, and cystic/necrotic nodal volume were generated from pre-treatment diagnostic or radiation CT-planning scans using a validated AI auto-segmentation algorithm. Assessments of model fit: Akaike information criterion (AIC) and Bayesian information criterion (BIC). Results 170 patients were included in the study. On univariable regression, primary tumor volume (coeff=39.43, p<0.001), nodal volume (coeff=39.54), tumor (T) stage (coeff=1031.09, p=0.009), nodal (N) stage (coeff=1840, p=0.018), HPV subtype 16 (coeff=3072.40, p=0.006), and CCI (coeff=-596.60, p=0.038) were associated with ctDNA. Cystic/necrotic nodal volume was not (coeff=0.31, p=0.11). On multivariable analysis, primary and nodal tumor volumes were associated with ctDNA (coeff=34.79, p=0.001 and coeff=24.68, p=0.022, respectively), but T and N stage were not (coeff=-439.28, p=0.37 and coeff=238.19, p=0.29, respectively). Including tumor volumes improved model fit compared to clinical stage alone (3420.96 vs 3435.88 AIC; 3449.18 vs 3457.83 BIC). Conclusions and Relevance AI-automated volumetrics of pretreatment tumor burden are independently associated with ctDNA, controlling for clinical stage, with stronger predictive capacity than staging in regression models. They may provide a practical correlate of ctDNA levels and help stratify patients.
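The study above compares regression models by AIC and BIC. For an ordinary least squares fit with Gaussian errors, both criteria can be computed from the residual sum of squares; the sketch below (illustrative helper names, not the study's code) shows why adding tumor volumes can improve fit despite the extra parameters:

```python
import math

def aic(n, rss, k):
    """Akaike information criterion for an OLS fit (up to an additive constant).

    n: number of observations, rss: residual sum of squares,
    k: number of estimated parameters. Lower is better.
    """
    return n * math.log(rss / n) + 2 * k

def bic(n, rss, k):
    """Bayesian information criterion; penalizes parameters more as n grows."""
    return n * math.log(rss / n) + math.log(n) * k

# A model with extra predictors wins only if it cuts the RSS enough
# to offset the complexity penalty.
print(aic(170, 900.0, 4) < aic(170, 1000.0, 2))  # True
```

Reporting both criteria, as the study does, guards against AIC's tendency to favor larger models: BIC's log(n) penalty is stricter once n exceeds about 8.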

Language: English

Cited by

0

Artificial Intelligence Measured Tumor Burden and Pre‐Treatment Circulating Tumor DNA in Human Papilloma Virus‐Associated Oropharynx Cancer DOI
Mina Bakhtiar, Zezhong Ye, Jonathan D. Schoenfeld

et al.

Head & Neck, Journal year: 2025, Issue: unknown

Published: May 5, 2025

ABSTRACT Background Artificial intelligence (AI)-based imaging analysis and circulating tumor-associated DNA (ctDNA) are both being used diagnostically in HPV-driven oropharynx squamous cell carcinoma (HPV-OPSCC). We evaluated associations between AI-measured tumor burden and ctDNA. Methods We analyzed 170 patients treated definitively for HPV-OPSCC. All had pre-treatment serum tumor-tissue modified viral (TTMV) ctDNA levels. An AI algorithm measured tumor and lymph node volumes on CT scans. Linear regressions detected associations between ctDNA (fragments/mL) and automated volumes, clinical tumor (T) and nodal (N) stage, and disease factors. Results Automated primary tumor volume (coeff = 39.43, p < 0.001), nodal volume (coeff = 39.54), T stage (coeff = 1031.09, p = 0.009), and N stage (coeff = 1840, p = 0.018) were associated with ctDNA. On multivariable analysis, primary (coeff = 34.79, p = 0.001) and nodal volumes (coeff = 24.68, p = 0.022) were associated with ctDNA; T and N stage were not. Conclusions AI-automated volumetrics are independently and more strongly associated with ctDNA than clinical stage and may provide a practical correlate to ctDNA.

Language: English

Cited by

0

Noninvasive molecular subtyping of pediatric low-grade glioma with self-supervised transfer learning DOI Creative Commons
Divyanshu Tak, Zezhong Ye, Anna Zapaishchykova

et al.

medRxiv (Cold Spring Harbor Laboratory), Journal year: 2023, Issue: unknown

Published: Aug. 8, 2023

To develop and externally validate a scan-to-prediction deep-learning pipeline for noninvasive, MRI-based BRAF mutational status classification in pLGG.

Language: English

Cited by

3

Light-UNet++: A Simplified U-NET++ Architecture for Multimodal Biomedical Image Segmentation DOI
Suchismita Das,

Srijib Bose,

Ritu Ritu

et al.

Published: April 21, 2023

Abstract-Images have been the most comprehensive data source in the field of healthcare but likewise one of the most challenging ones to analyze. Deep learning, since its inception, has revolutionized the realm of medical imaging, investigation, and diagnosis. Conventional CNN algorithms gave fair results on more manageable image segmentation problems but could not prove themselves on complex ones. That is where U-NET++ comes into the picture, an architecture for fast and precise segmentation of medical images. Because of its ability to combine low-level and high-level information, it has demonstrated strength in medical image segmentation: low-level information aids in improving segmentation accuracy, whereas high-level information aids in efficiently extracting features. A wide range of experiments were carried out on various datasets. Despite its excellent overall performance on multimodal medical images, it was noticed that the conventional U-Net requires modifications in certain areas. The proposed approach consists of a U-NET++ architecture with a modified pooling function: instead of max pooling, the model uses a mixed combination of max pool and avg pool. The design was tested and delivers impressive results: it achieved an F1 score of 0.9069, a sensitivity of 0.9984, and a specificity of 0.8075. The model's accuracy was 0.9966, with a Jaccard index of 0.6885 and a mean squared error (MSE) of 0.0033. Additionally, the receiver operating characteristic (ROC) area under the curve (AUC) was 0.9029. The findings suggest that the proposed model can outperform the existing architecture on these metrics, thereby establishing the effectiveness of Light-UNet++ in multimodal biomedical image segmentation tasks. Since early detection of any abnormality is crucial, this method will help in timely and accurate diagnosis and the launch of appropriate treatment at the earliest possible stage.
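The modification described above replaces max pooling with a mix of max and average pooling. A minimal, dependency-free sketch of that idea over non-overlapping windows (the `alpha` blend weight and function name are assumptions for illustration; the paper's layer would operate on feature-map tensors inside the network):

```python
def mixed_pool2d(image, k=2, alpha=0.5):
    """Mixed pooling over non-overlapping k x k windows of a 2D grid.

    Each output value is alpha * max(window) + (1 - alpha) * mean(window),
    blending the edge-preserving max pool with the smoothing average pool.
    """
    h, w = len(image), len(image[0])
    out = []
    for i in range(0, h - k + 1, k):
        row = []
        for j in range(0, w - k + 1, k):
            win = [image[i + di][j + dj] for di in range(k) for dj in range(k)]
            row.append(alpha * max(win) + (1 - alpha) * sum(win) / len(win))
        out.append(row)
    return out

print(mixed_pool2d([[1, 2], [3, 4]]))  # [[3.25]]: 0.5*4 + 0.5*2.5
```

The blend keeps the strongest activation in each window while still letting weaker activations contribute, which is the trade-off the abstract attributes to the modified pooling function.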

Language: English

Cited by

2