Weakly supervised large-scale pancreatic cancer detection using multi-instance learning
Shyamapada Mandal, Keerthiveena Balraj, Hariprasad Kodamana et al.

Frontiers in Oncology, Journal Year: 2024, Volume and Issue: 14

Published: Aug. 29, 2024

Introduction: Early detection of pancreatic cancer continues to be a challenge due to the difficulty in accurately identifying specific signs or symptoms that might correlate with the onset of cancer. Unlike breast, colon, and prostate cancers, where screening tests are often useful in detecting cancerous development, there are no screening tests to diagnose pancreatic cancers. As a result, most pancreatic cancers are diagnosed at an advanced stage, where treatment options, whether systemic therapy, radiation, or surgical interventions, offer limited efficacy. Methods: A two-stage weakly supervised deep learning-based model has been proposed to identify pancreatic tumors using computed tomography (CT) images from the Henry Ford Health (HFH) and publicly available Memorial Sloan Kettering Cancer Center (MSKCC) data sets. In the first stage, nnU-Net segmentation was used to crop the area at the location of the pancreas; the segmentation model was trained on the MSKCC repository of 281 patient image sets with established tumors. In the second stage, multi-instance classification was applied to the cropped pancreas region to segregate tumor-bearing from normal-appearing pancreas. The model was trained, tested, and validated on data obtained from HFH, comprising 463 cases and 2,882 controls. Results: The deep learning model offers an accuracy of 0.907 ± 0.01, a sensitivity of 0.905, a specificity of 0.908 ± 0.02, and an AUC (ROC) of 0.903 ± 0.01. The framework can automatically differentiate tumor from non-tumor pancreas with improved accuracy on a large dataset. Discussion: The proposed architecture shows significantly enhanced performance for predicting the presence of pancreatic tumors on CT compared with other studies reported in the literature.
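The second stage described above is a multi-instance learning (MIL) classifier: each cropped pancreas volume is a "bag" of patches, and only the patient-level label is known. The paper does not spell out its pooling rule, so the sketch below uses a common, hypothetical choice (mean of the top-k per-patch scores) purely to illustrate how instance scores become one patient-level prediction.

```python
import numpy as np

def mil_bag_score(instance_scores, top_k=3):
    """Aggregate per-patch tumor probabilities into one patient-level score.

    In multi-instance learning, each CT volume is a bag of patches
    (instances); only the bag label (tumor / no tumor) is known.
    Here the bag score is the mean of the top-k instance scores,
    one common pooling choice among several.
    """
    top = np.sort(np.asarray(instance_scores, dtype=float))[::-1][:top_k]
    return float(top.mean())

# Patch probabilities from a cropped pancreas region (values illustrative).
patch_probs = [0.05, 0.12, 0.91, 0.88, 0.10, 0.79]
bag_prob = mil_bag_score(patch_probs, top_k=3)
print(f"patient-level tumor probability: {bag_prob:.2f}")
```

With top-k pooling, a few strongly suspicious patches dominate the bag score, so a small tumor is not averaged away by the many normal-appearing patches in the same volume.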

Language: English

Balancing data consistency and diversity: Preprocessing and online data augmentation for multi-center deep learning-based MR-to-CT synthesis
Songyue Han, Cédric Hemon, Blanche Texier et al.

Pattern Recognition Letters, Journal Year: 2025, Volume and Issue: 189, P. 56 - 63

Published: Jan. 10, 2025

Language: English

Citations: 1

Indirect deformable image registration using synthetic image generated by unsupervised deep learning
Cédric Hemon, Blanche Texier, Hilda Chourak et al.

Image and Vision Computing, Journal Year: 2024, Volume and Issue: 148, P. 105143 - 105143

Published: June 21, 2024


Citations: 4

Synthetic Computed Tomography generation using deep-learning for female pelvic radiotherapy planning
Rachael Tulip, Sebastian Andersson, Robert Chuter et al.

Physics and Imaging in Radiation Oncology, Journal Year: 2025, Volume and Issue: 33, P. 100719 - 100719

Published: Jan. 1, 2025

Synthetic Computed Tomography (sCT) is required to provide electron density information for MR-only radiotherapy. Deep-learning (DL) methods for sCT generation show improved dose congruence over other approaches (e.g. bulk density). Using 30 female pelvis datasets to train a cycleGAN-inspired DL model, this study found that mean dose differences between the deformed planning CT (dCT) and the sCT were 0.2 % (D98 %). Three-dimensional gamma analysis showed a pass rate of 90.4 % at 1 %/1 mm. This shows that dosimetrically accurate sCTs can be generated from routinely available T2 spin echo sequences without the need for additional specialist sequences.
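The gamma analysis quoted above combines a dose-difference criterion (here 1 % of the maximum dose) with a distance-to-agreement criterion (1 mm): a point passes if any nearby evaluated point agrees within those tolerances. The simplified 1-D sketch below illustrates the idea only; the study's actual analysis is 3-D, and function and variable names here are hypothetical.

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, spacing_mm, dose_pct=1.0, dist_mm=1.0):
    """1-D global gamma analysis between a reference dose profile (e.g. on
    the dCT) and an evaluated one (e.g. recalculated on the sCT).

    For each reference point, gamma is the minimum over all evaluated points
    of sqrt((dose diff / dose tol)^2 + (distance / distance tol)^2);
    a point passes when gamma <= 1.
    """
    ref = np.asarray(ref_dose, dtype=float)
    ev = np.asarray(eval_dose, dtype=float)
    dose_tol = dose_pct / 100.0 * ref.max()      # global dose criterion
    x = np.arange(ref.size) * spacing_mm         # voxel positions in mm
    gammas = []
    for i, d_ref in enumerate(ref):
        dd = (ev - d_ref) / dose_tol             # dose-difference term
        dt = (x - x[i]) / dist_mm                # distance-to-agreement term
        gammas.append(np.sqrt(dd ** 2 + dt ** 2).min())
    return 100.0 * np.mean(np.asarray(gammas) <= 1.0)

# Toy dose profiles (illustrative values, 1 mm voxel spacing).
dct_dose = np.array([10.0, 20.0, 30.0, 20.0, 10.0])  # dose on deformed planning CT
sct_dose = np.array([10.1, 20.1, 29.9, 20.0, 10.0])  # dose recalculated on sCT
print(f"gamma pass rate: {gamma_pass_rate(dct_dose, sct_dose, spacing_mm=1.0):.1f}%")
```

Tight criteria such as 1 %/1 mm are far stricter than the clinically common 3 %/3 mm, which is why a 90.4 % pass rate at 1 %/1 mm indicates close dosimetric agreement.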

Language: English

Citations: 0

FedSynthCT-Brain: A federated learning framework for multi-institutional brain MRI-to-CT synthesis
Ciro Benito Raggio, Mathias Krohmer Zabaleta, Nils Skupien et al.

Computers in Biology and Medicine, Journal Year: 2025, Volume and Issue: 192, P. 110160 - 110160

Published: April 22, 2025

The generation of Synthetic Computed Tomography (sCT) images has become a pivotal methodology in modern clinical practice, particularly in the context of Radiotherapy (RT) treatment planning. The use of sCT enables the calculation of doses, pushing towards Magnetic Resonance Imaging (MRI)-guided radiotherapy treatments. Moreover, with the introduction of MRI-Positron Emission Tomography (PET) hybrid scanners, the derivation of sCT from MRI can improve the attenuation correction of PET images. Deep learning methods for MRI-to-sCT have shown promising results, but their reliance on single-centre training datasets limits their generalisation capabilities to diverse clinical settings. Creating centralised multi-centre datasets may also pose privacy concerns. To address these issues, we introduced FedSynthCT-Brain, an approach based on the Federated Learning (FL) paradigm for brain imaging. This is among the first applications of FL to MRI-to-sCT, employing a cross-silo horizontal approach that allows multiple centres to collaboratively train a U-Net-based deep learning model. We validated our method using real multicentre data from four European and American centres, simulating heterogeneous scanner types and acquisition modalities, and tested its performance on an independent centre outside the federation. For the unseen centre, the federated model achieved a median Mean Absolute Error (MAE) of 102.0 HU across 23 patients, with an interquartile range of 96.7-110.5 HU. The median (interquartile range) Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR) were 0.89 (0.86-0.89) and 26.58 (25.52-27.42), respectively. The analysis of the results showed acceptable performance of the federated approach, highlighting its potential to enhance generalisability and advance safe and equitable MRI-to-sCT while fostering collaboration and preserving data privacy.
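In cross-silo horizontal federated learning, each centre trains the shared model on its own MRI/CT pairs and only model parameters are exchanged; the server then aggregates them, typically weighting by local dataset size (FedAvg). The paper does not state its exact aggregation rule, so the toy sketch below assumes plain FedAvg with hypothetical names and scalar "layers" standing in for U-Net weights.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server-side FedAvg aggregation for cross-silo federated learning.

    client_weights: one list of per-layer arrays per centre, all the same shape.
    client_sizes:   number of local training samples at each centre.
    Images never leave the centres; only these weight arrays are exchanged.
    """
    total = float(sum(client_sizes))
    avg = [np.zeros_like(w, dtype=float) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for layer, w in zip(avg, weights):
            layer += (n / total) * np.asarray(w, dtype=float)
    return avg

# Two hypothetical centres with different dataset sizes (toy 1-layer "models").
centre_a = [np.array([1.0, 2.0])]   # weights after local training at centre A
centre_b = [np.array([3.0, 4.0])]   # weights after local training at centre B
global_model = fedavg([centre_a, centre_b], client_sizes=[30, 10])
print(global_model[0])  # weighted towards centre A, which has more data
```

Weighting by dataset size keeps a small centre from dominating the global model, while still letting every centre contribute; in a full system this aggregation step would repeat every communication round after local training.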

Language: English

Citations: 0

ESTRO 2023 survey on the use of synthetic computed tomography for magnetic resonance Imaging-only radiotherapy: Current status and future steps
M. Fusella, Editha Andres, Fernanda Villegas et al.

Physics and Imaging in Radiation Oncology, Journal Year: 2024, Volume and Issue: 32, P. 100652 - 100652

Published: Sept. 26, 2024

Language: English

Citations: 1

3D Unsupervised deep learning method for magnetic resonance imaging-to-computed tomography synthesis in prostate radiotherapy
Blanche Texier, Cédric Hemon, Adélie Queffelec et al.

Physics and Imaging in Radiation Oncology, Journal Year: 2024, Volume and Issue: 31, P. 100612 - 100612

Published: July 1, 2024

Language: English

Citations: 1
