Vision Transformer–based Deep Learning Models Accelerate Further Research for Predicting Neurosurgical Intervention DOI
Kengo Takahashi, Takuma Usuzaki, Ryusei Inamori

et al.

Radiology Artificial Intelligence, Journal Year: 2024, Volume and Issue: 6(4)

Published: June 12, 2024

Language: English

Identifying key factors for predicting O6-Methylguanine-DNA methyltransferase status in adult patients with diffuse glioma: a multimodal analysis of demographics, radiomics, and MRI by variable Vision Transformer DOI Creative Commons
Takuma Usuzaki, Kengo Takahashi, Ryusei Inamori

et al.

Neuroradiology, Journal Year: 2024, Volume and Issue: 66(5), P. 761 - 773

Published: March 12, 2024

Abstract Purpose This study aimed to perform multimodal analysis with a variable vision transformer (vViT) to predict O6-methylguanine-DNA methyltransferase (MGMT) promoter status among adult patients with diffuse glioma using demographics (sex and age), radiomic features, and MRI. Methods The training and test datasets contained 122 patients with 1,570 images and 30 patients with 484 images, respectively. Radiomic features were extracted from enhancing tumors (ET), necrotic tumor cores (NCR), and peritumoral edematous/infiltrated tissues (ED) on contrast-enhanced T1-weighted (CE-T1WI) and T2-weighted (T2WI) images. The vViT had 9 sectors: 1 demographic sector, 6 radiomic sectors (CE-T1WI ET, CE-T1WI NCR, CE-T1WI ED, T2WI ET, T2WI NCR, T2WI ED), and 2 image sectors (CE-T1WI, T2WI). Accuracy and the area under the receiver-operating characteristic curve (AUC-ROC) were calculated for the test dataset. Performance was compared with AlexNet, GoogleNet, VGG16, and ResNet using the McNemar and DeLong tests. Permutation importance (PI) analysis with the Mann–Whitney U test was performed. Results In the patient-based analysis, accuracy and AUC-ROC were 0.833 (95% confidence interval [95%CI]: 0.714–0.877) and 0.840 (95%CI: 0.650–0.995), respectively. The accuracy of the vViT was higher than that of VGG16 and ResNet, and its AUC-ROC was higher than that of GoogleNet (p < 0.05). The ED radiomic sector demonstrated the highest permutation importance (PI = 0.239, 95%CI: 0.237–0.240), significantly higher than all other sectors (p < 0.0001). Conclusion The vViT is a competent deep learning model for predicting MGMT promoter status. The ED-derived features made the dominant contribution.
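The permutation importance analysis highlighted in this abstract lends itself to a short illustration: shuffle one input sector at a time and measure how much performance drops. The sketch below is a generic, minimal version under stated assumptions (a scikit-learn-style classifier `model`, a flat NumPy feature matrix, and hypothetical sector-to-column groupings); it is not the authors' vViT implementation.

```python
# Minimal sketch of sector-wise permutation importance, assuming a trained
# binary classifier `model` with a scikit-learn-style predict() method and a
# feature matrix X whose columns are grouped into named "sectors"
# (demographics, per-region radiomics, image embeddings). The sector names
# and column ranges below are illustrative, not the authors' exact setup.
import numpy as np
from sklearn.metrics import accuracy_score

def sector_permutation_importance(model, X, y, sector_columns, n_repeats=30, seed=0):
    """Return the mean accuracy drop when each sector's columns are shuffled."""
    rng = np.random.default_rng(seed)
    baseline = accuracy_score(y, model.predict(X))
    importance = {}
    for name, cols in sector_columns.items():
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            # Shuffle only this sector's columns across patients,
            # breaking their association with the label.
            X_perm[:, cols] = X_perm[rng.permutation(len(X_perm))][:, cols]
            drops.append(baseline - accuracy_score(y, model.predict(X_perm)))
        importance[name] = float(np.mean(drops))
    return importance

# Hypothetical column grouping: 2 demographic columns, then radiomic blocks.
sectors = {
    "demographics": list(range(0, 2)),
    "CE-T1WI ET radiomics": list(range(2, 102)),
    "T2WI ED radiomics": list(range(102, 202)),
}
# importance = sector_permutation_importance(clf, X_test, y_test, sectors)
```

A larger mean drop indicates a sector the model relies on more heavily, which is the sense in which the ED-derived features were identified as dominant.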

Language: English

Citations

7

Assessment of MGMT promoter methylation status in glioblastoma using deep learning features from multi-sequence MRI of intratumoral and peritumoral regions DOI Creative Commons
Xuan Yu, Jing Zhou, Yaping Wu

et al.

Cancer Imaging, Journal Year: 2024, Volume and Issue: 24(1)

Published: Dec. 23, 2024

Abstract Objective This study aims to evaluate the effectiveness of deep learning features derived from multi-sequence magnetic resonance imaging (MRI) in determining O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status among glioblastoma patients. Methods Clinical, pathological, and MRI data from 356 patients (251 methylated, 105 unmethylated) were retrospectively examined from the public dataset The Cancer Imaging Archive. Each patient underwent preoperative brain MRI, which included T1-weighted (T1WI) and contrast-enhanced T1-weighted (CE-T1WI) sequences. Regions of interest (ROIs) were delineated to identify the necrotic tumor core (NCR), enhancing tumor (ET), and peritumoral edema (PED). The ET and NCR regions were categorized as intratumoral ROIs, whereas the PED region was categorized as a peritumoral ROI. Predictive models were developed using a Transformer algorithm based on intratumoral, peritumoral, and combined deep learning features. The area under the receiver operating characteristic curve (AUC) was employed to assess predictive performance. Results The ROI-based models built on deep learning features from multi-sequence MRI were capable of predicting MGMT promoter methylation status. The combined model exhibited superior diagnostic performance relative to the individual models, achieving an AUC of 0.923 (95% confidence interval [CI]: 0.890–0.948) under stratified cross-validation, with sensitivity and specificity of 86.45% and 87.62%, respectively. Conclusion Deep learning features from intratumoral and peritumoral regions of multi-sequence MRI can effectively distinguish between glioblastoma patients with and without MGMT promoter methylation.
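The evaluation protocol described here (stratified cross-validation with AUC, sensitivity, and specificity) can be sketched with standard scikit-learn utilities. In the minimal example below, a logistic regression is a hypothetical stand-in for the paper's Transformer-based model, and `deep_features` / `mgmt_labels` are assumed to be NumPy arrays prepared elsewhere.

```python
# Minimal sketch of stratified cross-validation reporting AUC, sensitivity,
# and specificity. A logistic regression stands in for the Transformer-based
# model; X holds pre-extracted deep features and y the binary MGMT labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import StratifiedKFold

def evaluate_with_stratified_cv(X, y, n_splits=5, seed=0):
    aucs, sensitivities, specificities = [], [], []
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in cv.split(X, y):
        clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        prob = clf.predict_proba(X[test_idx])[:, 1]
        pred = (prob >= 0.5).astype(int)
        tn, fp, fn, tp = confusion_matrix(y[test_idx], pred, labels=[0, 1]).ravel()
        aucs.append(roc_auc_score(y[test_idx], prob))
        sensitivities.append(tp / (tp + fn))   # true positive rate
        specificities.append(tn / (tn + fp))   # true negative rate
    return np.mean(aucs), np.mean(sensitivities), np.mean(specificities)

# auc, sens, spec = evaluate_with_stratified_cv(deep_features, mgmt_labels)
```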

Language: English

Citations

4

Vision transformer with feature calibration and selective cross-attention for brain tumor classification DOI
Mohammad Ali Labbaf Khaniki, Marzieh Mirzaeibonehkhater, Mohammad Manthouri

et al.

Iran Journal of Computer Science, Journal Year: 2024, Volume and Issue: unknown

Published: Dec. 31, 2024

Language: English

Citations

3

The Role of Artificial Intelligence in Diagnostic Neurosurgery: A Systematic Review DOI
William Li, Armand Gumera, Shiv Surya

et al.

Research Square (Research Square), Journal Year: 2025, Volume and Issue: unknown

Published: April 4, 2025

Abstract Background: Artificial intelligence (AI) is increasingly applied in diagnostic neurosurgery, enhancing precision and decision-making in the neuro-oncology, vascular, functional, and spinal subspecialties. Despite its potential, variability in outcomes necessitates a systematic review of its performance and applicability. Methods: A comprehensive search of PubMed, the Cochrane Library, Embase, CNKI, and ClinicalTrials.gov was conducted from January 2020 to 2025. Inclusion criteria comprised studies utilizing AI for diagnosis in neurosurgery and reporting quantitative performance metrics. Studies were excluded if they focused on non-human subjects, lacked clear metrics, or did not directly relate to AI applications in neurosurgery. Risk of bias was assessed using the PROBAST tool. This study was registered with PROSPERO (registration number CRD42025631040). Results: Within 186 studies, neural networks (29%) and hybrid models (49%) dominated. Studies were categorised into neuro-oncology (52.69%), vascular neurosurgery (19.89%), functional neurosurgery (16.67%), and spinal neurosurgery (11.83%). Median accuracies exceeded 85% in most categories, with neuro-oncology models achieving high accuracy in tumour detection, grading, and segmentation. Vascular neurosurgery models excelled in stroke and intracranial haemorrhage detection, with median AUC values of 97%. Functional neurosurgery models showed promising results, though variability in sensitivity and specificity underscores the need for standardised datasets and validation. Discussion: The review's limitations include the lack of data weighting, the absence of a meta-analysis, the limited data collection timeframe, variable study quality, and risk of bias in some studies. Conclusion: AI shows potential for improving diagnosis across neurosurgical domains. Models applied to stroke, ICH, and aneurysm detection, and to conditions such as Parkinson's disease and epilepsy, demonstrate promising results. However, variability in sensitivity and specificity underscores the need for further research and model refinement to ensure clinical viability and effectiveness.

Language: English

Citations

0

Predicting isocitrate dehydrogenase status among adult patients with diffuse glioma using patient characteristics, radiomic features, and magnetic resonance imaging: Multi-modal analysis by variable vision transformer DOI Creative Commons
Takuma Usuzaki, Ryusei Inamori, Takashi Shizukuishi

et al.

Magnetic Resonance Imaging, Journal Year: 2024, Volume and Issue: 111, P. 266 - 276

Published: May 29, 2024

To evaluate the performance of a multimodal model, termed the variable Vision Transformer (vViT), in the task of predicting isocitrate dehydrogenase (IDH) status among adult patients with diffuse glioma.

Language: English

Citations

2

Predicting EGFR Status After Radical Nephrectomy or Partial Nephrectomy for Renal Cell Carcinoma on CT Using a Self-attention-based Model: Variable Vision Transformer (vViT) DOI Creative Commons
Takuma Usuzaki, Ryusei Inamori, Mami Ishikuro

et al.

Deleted Journal, Journal Year: 2024, Volume and Issue: unknown

Published: June 28, 2024

To assess the effectiveness of the vViT model for predicting postoperative renal function decline by leveraging clinical data, medical images, and image-derived features, and to identify the most dominant factor influencing this prediction.

Language: English

Citations

2
