Deep learning in MRI‐guided radiation therapy: A systematic review

Zach Eidex, Yifu Ding, Jing Wang

et al.

Journal of Applied Clinical Medical Physics, Journal Year: 2023, Volume and Issue: 25(2)

Published: Sept. 15, 2023

MRI-guided radiation therapy (MRgRT) offers a precise and adaptive approach to treatment planning. Deep learning applications that augment the capabilities of MRgRT are systematically reviewed, with emphasis placed on the underlying methods. Studies are further categorized into the areas of segmentation, synthesis, radiomics, and real-time MRI. Finally, clinical implications, current challenges, and future directions are discussed.

Language: English

Deep learning based synthesis of MRI, CT and PET: Review and analysis
Sanuwani Dayarathna, Kh Tohidul Islam, Sergio Uribe

et al.

Medical Image Analysis, Journal Year: 2023, Volume and Issue: 92, P. 103046 - 103046

Published: Dec. 1, 2023

Medical image synthesis represents a critical area of research in clinical decision-making, aiming to overcome the challenges associated with acquiring multiple image modalities for an accurate clinical workflow. This approach is beneficial for estimating a desired modality from a given source modality among the most common medical imaging contrasts, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET). However, translating between two modalities presents difficulties due to complex, non-linear domain mappings. Deep learning-based generative modelling has exhibited superior performance in synthetic contrast applications compared with conventional methods. This survey comprehensively reviews deep learning-based medical image translation from 2018 to 2023, covering pseudo-CT, synthetic MR, and synthetic PET. We provide an overview of the contrasts and deep learning networks frequently employed for synthesis. Additionally, we conduct a detailed analysis of each method, focusing on their diverse model designs based on input domains and network architectures. We also analyse novel network architectures, ranging from CNNs to recent Transformer and Diffusion models. This includes comparing loss functions, available datasets and anatomical regions, image quality assessments, and other downstream tasks. Finally, we discuss the challenges, identify solutions within the literature, and suggest possible future directions. We hope that the insights offered in this paper will serve as a valuable roadmap for researchers in the field.

Language: English

Citations

54

Synthetic CT generation from MRI using 3D transformer‐based denoising diffusion model
Shaoyan Pan, Elham Abouei, Jacob Wynne

et al.

Medical Physics, Journal Year: 2023, Volume and Issue: 51(4), P. 2538 - 2548

Published: Nov. 27, 2023

Abstract Background and purpose: Magnetic resonance imaging (MRI)‐based synthetic computed tomography (sCT) simplifies radiation therapy treatment planning by eliminating the need for CT simulation and error‐prone image registration, ultimately reducing patient dose and setup uncertainty. In this work, we propose an MRI‐to‐CT transformer‐based improved denoising diffusion probabilistic model (MC‐IDDPM) to translate MRI into high‐quality sCT to facilitate radiation treatment planning. Methods: MC‐IDDPM implements diffusion processes with a shifted‐window transformer network to generate sCT from MRI. The proposed model consists of two processes: a forward process, which involves adding Gaussian noise to real CT scans to create noisy images, and a reverse process, in which a shifted‐window transformer V‐net (Swin‐Vnet) denoises the noisy CT images conditioned on the MRI of the same patient to produce noise‐free CT scans. With an optimally trained Swin‐Vnet, the reverse process was used to generate sCT scans matching the MRI anatomy. We evaluated the proposed method on an institutional brain dataset and an institutional prostate dataset. Quantitative evaluations were conducted using several metrics, including Mean Absolute Error (MAE), Peak Signal‐to‐Noise Ratio (PSNR), Multi‐scale Structure Similarity Index (SSIM), and Normalized Cross Correlation (NCC). Dosimetry analyses were also performed, including comparisons of mean dose and target coverages at 95% and 99%. Results: For the brain dataset, MC‐IDDPM generated sCTs with state‐of‐the‐art quantitative results: MAE 48.825 ± 21.491 HU, PSNR 26.491 ± 2.814 dB, SSIM 0.947 ± 0.032, and NCC 0.976 ± 0.019. For the prostate dataset: MAE 55.124 ± 9.414 HU, PSNR 28.708 ± 2.112 dB, SSIM 0.878 ± 0.040, and NCC 0.940 ± 0.039. MC‐IDDPM demonstrates a statistically significant improvement (p < 0.05) in most metrics when compared with competing networks, for both brain and prostate CT. Dosimetry analyses indicated that target coverage differences were within 0.34%. Conclusions: We have developed and validated a novel approach for generating sCT images from routine MRIs using a transformer‐based DDPM. This method effectively captures the complex relationship between MRI and CT, allowing robust sCT to be generated in a matter of minutes. It has the potential to greatly simplify treatment planning by eliminating the need for additional CT scans, reducing the amount of time patients spend in simulation and planning, and enhancing the accuracy of treatment delivery.
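To make the two diffusion processes concrete, here is a minimal, hypothetical PyTorch sketch of conditional denoising diffusion for MRI-to-sCT translation: a forward step that noises a CT volume under a simple linear schedule, and a reverse step in which a denoising network (a stand-in for the paper's Swin-Vnet) predicts the noise conditioned on the paired MRI. The schedule, network interface, and tensor shapes are illustrative assumptions, not the authors' implementation.

```python
import torch

# Hypothetical linear noise schedule over T steps; the paper's actual schedule may differ.
T = 1000
betas = torch.linspace(1e-4, 2e-2, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def forward_diffuse(ct, t):
    """Forward process: add Gaussian noise to real CT volumes (B, C, D, H, W) at timesteps t."""
    noise = torch.randn_like(ct)
    a_bar = alpha_bars[t].view(-1, 1, 1, 1, 1)  # broadcast per-sample over the 3D volume
    ct_noisy = torch.sqrt(a_bar) * ct + torch.sqrt(1.0 - a_bar) * noise
    return ct_noisy, noise

def reverse_step(denoiser, ct_noisy, mri, t):
    """One reverse step: predict noise conditioned on the paired MRI and partially denoise."""
    eps_hat = denoiser(torch.cat([ct_noisy, mri], dim=1), t)  # MRI enters as a conditioning channel
    beta, alpha, a_bar = betas[t], alphas[t], alpha_bars[t]
    mean = (ct_noisy - beta / torch.sqrt(1.0 - a_bar) * eps_hat) / torch.sqrt(alpha)
    if t > 0:
        return mean + torch.sqrt(beta) * torch.randn_like(ct_noisy)
    return mean  # final, noise-free sCT estimate

def diffusion_loss(denoiser, ct, mri):
    """Simplified training objective: MSE between true and predicted noise."""
    t = torch.randint(0, T, (ct.shape[0],))
    ct_noisy, noise = forward_diffuse(ct, t)
    eps_hat = denoiser(torch.cat([ct_noisy, mri], dim=1), t)
    return torch.nn.functional.mse_loss(eps_hat, noise)
```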

Language: English

Citations

47

Transformer-based Generative Adversarial Networks in Computer Vision: A Comprehensive Survey
Shiv Ram Dubey, Satish Kumar Singh

IEEE Transactions on Artificial Intelligence, Journal Year: 2024, Volume and Issue: 5(10), P. 4851 - 4867

Published: May 24, 2024

Generative Adversarial Networks (GANs) have been very successful at synthesizing images that resemble a given dataset. The images artificially generated by GANs are highly realistic. GANs have shown potential usability in several computer vision applications, including image generation, image-to-image translation, and video synthesis. Conventionally, the generator network is the backbone of a GAN, generating the samples, while a discriminator network is used to facilitate the training of the generator. These networks are usually Convolutional Neural Networks (CNNs). Convolution-based networks exploit local relationships within a layer, which requires deep networks to extract abstract features. However, the recently developed Transformer networks are able to exploit global relationships, with tremendous performance improvements on vision problems. Motivated by this success, recent works have tried to exploit Transformers within the GAN framework for image and video synthesis. This paper presents a comprehensive survey of the developments and advancements in GANs utilizing Transformer networks for computer vision applications. A comparison of performance across applications and benchmark datasets is also performed and analyzed. The survey will be useful for understanding the research trends and gaps related to Transformer-based GANs and for developing advanced GAN architectures that exploit both global and local relationships for different applications.
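For readers unfamiliar with the generator/discriminator interplay summarized above, the following is a minimal PyTorch sketch of one adversarial training step. The tiny fully connected networks and hyperparameters are placeholders for illustration only; the surveyed works replace them with CNN or Transformer backbones.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 784  # illustrative sizes (e.g., flattened 28x28 images)

# Placeholder networks; real GANs use CNN or Transformer generators and discriminators.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):
    batch = real_images.shape[0]
    z = torch.randn(batch, latent_dim)
    fake_images = G(z)

    # Discriminator update: push real samples toward label 1 and fakes toward 0.
    d_loss = bce(D(real_images), torch.ones(batch, 1)) + \
             bce(D(fake_images.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to fool the discriminator into predicting 1 for fakes.
    g_loss = bce(D(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```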

Language: English

Citations

24

Generative Adversarial Networks (GANs) in Medical Imaging: Advancements, Applications, and Challenges
Showrov Islam, M Aziz, Hadiur Rahman Nabil

et al.

IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 35728 - 35753

Published: Jan. 1, 2024

Generative Adversarial Networks (GANs) are a class of artificial intelligence algorithms that consist of a generator and a discriminator trained simultaneously through adversarial training. GANs have found crucial applications in various fields, including medical imaging. In healthcare, GANs contribute by generating synthetic medical images, enhancing data quality, and aiding image segmentation, disease detection, and image synthesis. Their importance lies in their ability to generate realistic images, facilitating improved diagnostics, research, and training for medical professionals. Understanding GAN applications, algorithms, current advancements, and challenges is imperative for further advancement of the medical imaging domain. However, no existing study explores the recent state-of-the-art developments in this area. To overcome this research gap, in this extensive study we began by exploring the vast array of GAN applications in medical imaging, scrutinizing them within recent research. We then dive into the prevalent datasets and pre-processing techniques to enhance comprehension. Subsequently, an in-depth discussion of GAN algorithms, elucidating their respective strengths and limitations, is provided. After that, we meticulously analyzed the results and experimental details of some cutting-edge studies to obtain a more comprehensive understanding. Lastly, we discussed the diverse challenges encountered and future research directions to mitigate these concerns. This systematic review offers a complete overview of GANs in medical imaging, encompassing application domains, models, performance analysis, challenges, and research directions, serving as a valuable resource for multidisciplinary studies.
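One application mentioned above is synthetic-data augmentation. Assuming a generator has already been trained (for example with a loop like the sketch in the previous entry), the hypothetical wrapper below shows how sampled images could be mixed with real ones to enlarge a training set; the dataset class, class label, and mixing choice are illustrative, not taken from the reviewed studies.

```python
import torch
from torch.utils.data import Dataset, ConcatDataset

class SyntheticImageDataset(Dataset):
    """Wraps a trained generator so sampled images can be mixed with real data."""
    def __init__(self, generator, latent_dim, n_samples, label):
        self.images = []
        with torch.no_grad():
            for _ in range(n_samples):
                z = torch.randn(1, latent_dim)
                self.images.append(generator(z).squeeze(0))
        self.label = label

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        return self.images[idx], self.label

# Hypothetical usage: augment a real dataset with 500 synthetic minority-class samples.
# real_dataset = ...  (any torch Dataset of (image, label) pairs)
# synthetic = SyntheticImageDataset(G, latent_dim=64, n_samples=500, label=1)
# augmented = ConcatDataset([real_dataset, synthetic])
```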

Language: English

Citations

18

Generative AI for synthetic data across multiple medical modalities: A systematic review of recent developments and challenges
Mahmoud K. Ibrahim, Yasmina Al Khalil, Sina Amirrajab

et al.

Computers in Biology and Medicine, Journal Year: 2025, Volume and Issue: 189, P. 109834 - 109834

Published: March 1, 2025

This paper presents a comprehensive systematic review of generative models (GANs, VAEs, DMs, and LLMs) used to synthesize various medical data types, including imaging (dermoscopic, mammographic, ultrasound, CT, MRI, X-ray), text, time-series, and tabular data (EHR). Unlike previous narrowly focused reviews, our study encompasses a broad array of medical data modalities and explores various generative models. Our aim is to offer insights into their current and future applications in medical research, particularly in the context of synthesis applications, generation techniques, and evaluation methods, as well as to provide a GitHub repository as a dynamic resource for ongoing collaboration and innovation. Our search strategy queries databases such as Scopus, PubMed, and ArXiv, focusing on recent works from January 2021 to November 2023 and excluding reviews and perspectives. This period emphasizes advancements beyond GANs, which have been extensively covered in previous reviews. The survey also covers conditional generation, an aspect not addressed in similar work. Key contributions include a broad, multi-modality scope that identifies cross-modality opportunities unavailable in single-modality surveys. While core generative techniques are transferable, we find that synthesis methods often lack sufficient integration of patient-specific context, clinical knowledge, and modality-specific requirements tailored to the unique characteristics of medical data. Conditional generation leveraging textual conditioning and multimodal synthesis remains underexplored but promising. Our findings are structured around three themes: (1) Synthesis applications, highlighting clinically valid applications and significant gaps in the use of synthetic data for augmentation, validation, and evaluation; (2) Generation techniques, identifying gaps in personalization and opportunities for innovation; and (3) Evaluation methods, revealing the absence of standardized benchmarks, the need for large-scale validation, and the importance of privacy-aware, clinically relevant evaluation frameworks. These findings emphasize the need for benchmarking and comparative studies to promote openness and collaboration.

Language: English

Citations

5

Abdominal synthetic CT generation for MR-only radiotherapy using structure-conserving loss and transformer-based cycle-GAN
C. Lee, Young Hun Yoon, Jiwon Sung

et al.

Frontiers in Oncology, Journal Year: 2025, Volume and Issue: 14

Published: Jan. 3, 2025

Recent deep-learning based synthetic computed tomography (sCT) generation using magnetic resonance (MR) images has shown promising results. However, generating sCT for the abdominal region poses challenges due to patient motion, including respiration and peristalsis. To address these challenges, this study investigated an unsupervised learning approach using a transformer-based cycle-GAN with a structure-preserving loss for abdominal cancer patients. A total of 120 T2-weighted MR images scanned by a 1.5 T Unity MR-Linac and their corresponding CT images were collected. Patient data were aligned by rigid registration. The model employed a cycle-GAN architecture, incorporating a modified Swin-UNETR as the generator. A modality-independent neighborhood descriptor (MIND) loss was used for geometric consistency. Image quality was compared between sCT and planning CT using the metrics of mean absolute error (MAE), peak signal-to-noise ratio (PSNR), structure similarity index measure (SSIM), and Kullback-Leibler (KL) divergence. Dosimetric accuracy was evaluated by gamma analysis and relative dose volume histogram differences for each organ-at-risk, utilizing the original treatment plan. The comparison was conducted among the original cycle-GAN, Swin-UNETR-only, MIND-only, and the proposed cycle-GAN. The MAE, PSNR, SSIM, and KL divergence of the original and the proposed method were 86.1 HU, 26.48 dB, 0.828, and 0.448 versus 79.52 HU, 27.05 dB, 0.845, and 0.230, respectively; the differences in MAE and PSNR were statistically significant. The global gamma passing rates of the proposed method at 1%/1 mm, 2%/2 mm, and 3%/3 mm were … ± 5.9%, 97.1 ± 2.7%, and 98.9 ± 1.0%, respectively. The proposed method significantly improves the image metrics of sCT in the abdomen of cancer patients compared with the original cycle-GAN, and local gamma passing rates were slightly higher for the proposed method. This study demonstrated improved sCT generation using a transformer-based cycle-GAN with a structure-preserving loss, even in the complex anatomy of the abdomen.
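To illustrate how a structure-conserving term can be combined with cycle-GAN training as described above, here is a hedged PyTorch-style sketch of the generator objective in the MR-to-CT direction. A simple image-gradient surrogate stands in for the MIND descriptor used in the paper, and the loss weights, network handles, and single-direction cycle are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def gradient_structure_loss(mr, sct):
    """Crude structural-consistency surrogate: match spatial gradient magnitudes.
    The actual paper uses a modality-independent neighborhood descriptor (MIND)."""
    def grads(x):
        dx = x[..., :, 1:] - x[..., :, :-1]
        dy = x[..., 1:, :] - x[..., :-1, :]
        return dx.abs(), dy.abs()
    mdx, mdy = grads(mr)
    sdx, sdy = grads(sct)
    return F.l1_loss(sdx, mdx) + F.l1_loss(sdy, mdy)

def generator_objective(G_mr2ct, G_ct2mr, D_ct, mr,
                        lam_cycle=10.0, lam_struct=5.0):  # illustrative weights
    sct = G_mr2ct(mr)        # MR -> synthetic CT
    mr_rec = G_ct2mr(sct)    # cycle back: synthetic CT -> reconstructed MR

    pred_fake = D_ct(sct)
    adv = F.mse_loss(pred_fake, torch.ones_like(pred_fake))   # LSGAN-style adversarial term
    cycle = F.l1_loss(mr_rec, mr)                             # cycle-consistency term
    struct = gradient_structure_loss(mr, sct)                 # geometry-preservation term
    return adv + lam_cycle * cycle + lam_struct * struct
```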

Language: English

Citations

2

Artificial intelligence generated content (AIGC) in medicine: A narrative review
Liangjing Shao, Benshuang Chen, Ziqun Zhang

et al.

Mathematical Biosciences & Engineering, Journal Year: 2024, Volume and Issue: 21(1), P. 1672 - 1711

Published: Jan. 1, 2024

Recently, artificial intelligence generated content (AIGC) has been receiving increased attention and is growing exponentially. AIGC is produced by generative artificial intelligence (AI) models from the intentional information extracted from human-provided instructions, and it can quickly and automatically generate large amounts of high-quality content. Currently, medicine faces a shortage of medical resources and complex clinical procedures. Due to its characteristics, AIGC can help alleviate these problems, and as a result its application in medicine has gained attention in recent years. Therefore, this paper provides a comprehensive review of the current state of studies involving AIGC in medicine. First, we present an overview of AIGC. Furthermore, based on recent studies, the application of AIGC in medicine is reviewed from two aspects: medical image processing and medical text generation. The basic generative AI models, tasks, target organs, datasets, and contributions of the studies are summarized. Finally, we discuss the limitations and challenges faced by AIGC in medicine and propose possible solutions based on relevant studies. We hope this review helps readers understand the potential of AIGC in medicine and obtain some innovative ideas in this field.

Language: English

Citations

12

Vision transformer: To discover the “four secrets” of image patches
Tao Zhou, Yuxia Niu, Huiling Lu

et al.

Information Fusion, Journal Year: 2024, Volume and Issue: 105, P. 102248 - 102248

Published: Jan. 11, 2024

Language: English

Citations

10

Challenges and opportunities in the development and clinical implementation of artificial intelligence based synthetic computed tomography for magnetic resonance only radiotherapy
Fernanda Villegas, Riccardo Dal Bello, Emilie Alvarez-Andres

et al.

Radiotherapy and Oncology, Journal Year: 2024, Volume and Issue: 198, P. 110387 - 110387

Published: June 15, 2024

Language: English

Citations

10

Breast ultrasound image classification and physiological assessment based on GoogLeNet
Shaohua Chen, Yan‐Ling Wu, Canyu Pan

et al.

Journal of Radiation Research and Applied Sciences, Journal Year: 2023, Volume and Issue: 16(3), P. 100628 - 100628

Published: July 20, 2023

Medical ultrasound image classification based on convolutional neural networks is the mainstream breast cancer classification model, but its limited perceptual ability restricts its capacity to capture global information. A total of 880 breast ultrasound images were collected from 700 patients, including 103 normal images, 467 malignant tumor images, and 210 benign tumor images. In this paper, breast cancer diagnosis was realized by constructing a CNN model based on GoogLeNet. First, the images were preprocessed with a TV (total variation) model. After that, a more accurate model with a wider range of application was obtained by training an improved Inception architecture. Then, features of different sizes were extracted, and feature classification was completed in the classifiers to realize the detection of breast cancer. Meanwhile, a comparative analysis was performed to verify the advantages of the GoogLeNet model. The training time of the model was effectively reduced and the accuracy rate was improved, reaching 96.37% when combined with transfer learning, with the loss value down to 0.3492. The structural differences between the two models and the influence of transfer learning on the experimental results are further discussed. Finally, with transfer learning, three models were tested separately; the results show that transfer learning can improve system performance. Based on this, the paper designs a classification method combining GoogLeNet and transfer learning. Experiments show that the method can repair part of the texture damaged by markers in ultrasonic thyroid nodule images and accurately judge whether the tissue is diseased, which greatly improves the diagnostic efficiency of doctors.
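As a concrete illustration of the GoogLeNet-plus-transfer-learning setup described above, the sketch below loads an ImageNet-pretrained GoogLeNet from torchvision, replaces its classifier head for three ultrasound classes (normal, benign, malignant), and fine-tunes only the new head. The preprocessing, freezing policy, and optimizer settings are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Load an ImageNet-pretrained GoogLeNet and adapt its classifier head to 3 classes.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 3)   # normal / benign / malignant

# One common transfer-learning policy: freeze the backbone, train only the new head.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc")

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # replicate grayscale ultrasound to 3 channels
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_epoch(loader):
    """Fine-tune the classifier head on preprocessed (image, label) batches."""
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```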

Language: English

Citations

20