Weakly supervised large-scale pancreatic cancer detection using multi-instance learning
Shyamapada Mandal,

Keerthiveena Balraj,

Hariprasad Kodamana

et al.

Frontiers in Oncology, 2024, No. 14

Published: Aug. 29, 2024

Introduction: Early detection of pancreatic cancer continues to be a challenge due to the difficulty in accurately identifying specific signs or symptoms that might correlate with the onset of cancer. Unlike breast, colon, and prostate cancers, where screening tests are often useful for catching cancerous development early, there is no screening test to diagnose pancreatic cancers. As a result, most pancreatic cancers are diagnosed at an advanced stage, where treatment options, whether systemic therapy, radiation, or surgical interventions, offer limited efficacy. Methods: A two-stage weakly supervised deep learning-based model has been proposed to identify pancreatic tumors using computed tomography (CT) images from Henry Ford Health (HFH) and the publicly available Memorial Sloan Kettering Cancer Center (MSKCC) data sets. In the first stage, an nnU-Net segmentation model, trained on the MSKCC repository of 281 patient image sets with established tumors, was used to crop the area around the location of the pancreas. In the second stage, a multi-instance classification model was applied to the cropped pancreas region to segregate tumor-bearing from normal-appearing pancreas. The model was trained, tested, and validated on data obtained from HFH, comprising 463 cases and 2,882 controls. Results: The weakly supervised learning model offers an accuracy of 0.907 ± 0.01, a sensitivity of 0.905, a specificity of 0.908 ± 0.02, and an AUC (ROC) of 0.903 ± 0.01. The framework can automatically differentiate tumor from non-tumor pancreas with improved performance on a large dataset. Discussion: The proposed architecture shows significantly enhanced performance for predicting the presence of pancreatic tumors on CT compared with other studies reported in the literature.
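The second-stage multi-instance classifier described above treats each patient scan as a bag of image patches, with only a patient-level label available. The bag-level decision step can be sketched as follows (an illustrative pooling rule, not the authors' implementation; the patch scores and the top-k choice are hypothetical):

```python
import numpy as np

def bag_probability(instance_probs, top_k=3):
    """Aggregate per-patch (instance) tumor probabilities into a single
    patient-level (bag) score by averaging the top-k most suspicious
    patches -- one common MIL pooling rule."""
    probs = np.sort(np.asarray(instance_probs, dtype=float))[::-1]
    return float(probs[:top_k].mean())

# Hypothetical per-patch classifier outputs for one CT scan (one bag).
patch_scores = [0.05, 0.10, 0.92, 0.88, 0.15]
score = bag_probability(patch_scores)   # average of the 3 highest scores
is_tumor = score >= 0.5                 # bag labeled positive
```

Top-k mean pooling is only one choice; max pooling (top_k=1) or attention-based pooling are frequent alternatives in the MIL literature.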

Language: English

Balancing data consistency and diversity: Preprocessing and online data augmentation for multi-center deep learning-based MR-to-CT synthesis

Songyue Han,

Cédric Hémon, Blanche Texier

et al.

Pattern Recognition Letters, 2025, No. 189, pp. 56-63

Published: Jan. 10, 2025

Language: English

Cited by

1

Indirect deformable image registration using synthetic image generated by unsupervised deep learning
Cédric Hémon, Blanche Texier, Hilda Chourak

et al.

Image and Vision Computing, 2024, No. 148, pp. 105143-105143

Published: June 21, 2024

Cited by

4

Synthetic Computed Tomography generation using deep-learning for female pelvic radiotherapy planning

Rachael Tulip,

Sebastian Andersson,

Robert Chuter

et al.

Physics and Imaging in Radiation Oncology, 2025, No. 33, pp. 100719-100719

Published: Jan. 1, 2025

Synthetic Computed Tomography (sCT) is required to provide electron density information for MR-only radiotherapy. Deep-learning (DL) methods for sCT generation show improved dose congruence over other approaches (e.g. bulk density). Using 30 female pelvis datasets to train a cycleGAN-inspired DL model, this study found mean dose differences between the deformed planning CT (dCT) and the sCT of 0.2% (D98%). Three-dimensional gamma analysis showed a mean pass rate of 90.4% at 1%/1 mm. This shows that dosimetrically accurate sCTs can be generated from routinely available T2 spin echo sequences without the need for additional specialist sequences.
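The dose metrics quoted above can be made concrete with a small sketch. This is illustrative only, not the study's analysis pipeline; it assumes voxel doses in NumPy arrays, and the normalization of the mean difference to the maximum planning dose is an assumption:

```python
import numpy as np

def d98(dose_voxels):
    """D98: the minimum dose received by the hottest 98% of the volume,
    i.e. the 2nd percentile of the voxel dose distribution."""
    return float(np.percentile(np.asarray(dose_voxels, dtype=float), 2))

def mean_dose_diff_pct(dose_dct, dose_sct):
    """Mean dose difference between dCT and sCT dose grids, expressed as
    a percentage of the maximum dCT dose (a hypothetical normalization)."""
    dct = np.asarray(dose_dct, dtype=float)
    sct = np.asarray(dose_sct, dtype=float)
    return float(100.0 * np.mean(sct - dct) / dct.max())
```

Gamma analysis (the 1%/1 mm criterion above) additionally combines a dose-difference tolerance with a distance-to-agreement search per voxel, so it is deliberately omitted from this sketch.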

Language: English

Cited by

0

FedSynthCT-Brain: A federated learning framework for multi-institutional brain MRI-to-CT synthesis
Ciro Benito Raggio,

Mathias Krohmer Zabaleta,

Nils Skupien

et al.

Computers in Biology and Medicine, 2025, No. 192, pp. 110160-110160

Published: April 22, 2025

The generation of Synthetic Computed Tomography (sCT) images has become a pivotal methodology in modern clinical practice, particularly in the context of Radiotherapy (RT) treatment planning. The use of sCT enables the calculation of doses, pushing towards Magnetic Resonance Imaging (MRI)-guided radiotherapy treatments. Moreover, with the introduction of MRI-Positron Emission Tomography (PET) hybrid scanners, the derivation of sCT from MRI can improve the attenuation correction of PET images. Deep learning methods for MRI-to-sCT have shown promising results, but their reliance on single-centre training datasets limits their generalisation capabilities to diverse scanner settings. Creating centralised multi-centre datasets may pose privacy concerns. To address these issues, we introduced FedSynthCT-Brain, an approach based on the Federated Learning (FL) paradigm for brain imaging. This is among the first applications of FL to MRI-to-sCT, employing a cross-silo horizontal FL approach that allows multiple centres to collaboratively train a U-Net-based deep learning model. We validated our method using real multicentre data from four European and American centres, simulating heterogeneous scanner types and acquisition modalities, and tested its performance on an independent centre outside the federation. In the case of the unseen centre, the federated model achieved a median Mean Absolute Error (MAE) of 102.0 HU across 23 patients, with an interquartile range of 96.7-110.5 HU. The median (interquartile range) Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR) were 0.89 (0.86-0.89) and 26.58 (25.52-27.42), respectively. The analysis of the results showed acceptable performance of the federated approach, highlighting its potential to enhance generalisability and advance safe and equitable sCT generation while fostering collaboration and preserving data privacy.
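Cross-silo federated training of the kind described above typically aggregates client model updates with Federated Averaging (FedAvg): each round, centres train locally and the server takes a dataset-size-weighted average of their parameters. A minimal sketch of one aggregation round follows (hypothetical weights and centre sizes; not the FedSynthCT-Brain implementation):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """One FedAvg round: parameter-wise weighted average of the clients'
    model weights, weighted by each centre's dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(n_params)
    ]

# Two hypothetical centres, each contributing one weight tensor.
centre_a = [np.array([1.0, 2.0])]
centre_b = [np.array([3.0, 4.0])]
global_weights = fed_avg([centre_a, centre_b], client_sizes=[10, 30])
# Centre B holds 75% of the data, so its weights dominate the average.
```

Only model parameters leave each centre in this scheme, never the images themselves, which is what makes the approach privacy-preserving.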

Language: English

Cited by

0

3D Unsupervised deep learning method for magnetic resonance imaging-to-computed tomography synthesis in prostate radiotherapy
Blanche Texier, Cédric Hémon,

Adélie Queffelec

et al.

Physics and Imaging in Radiation Oncology, 2024, No. 31, pp. 100612-100612

Published: July 1, 2024

Language: English

Cited by

1

ESTRO 2023 survey on the use of synthetic computed tomography for magnetic resonance Imaging-only radiotherapy: Current status and future steps
M. Fusella,

Editha Andres,

Fernanda Villegas

et al.

Physics and Imaging in Radiation Oncology, 2024, No. 32, pp. 100652-100652

Published: Sep. 26, 2024

Language: English

Cited by

1
