PET and CT based DenseNet outperforms advanced deep learning models for outcome prediction of oropharyngeal cancer
Baoqiang Ma, Jiapan Guo, Lisanne V. van Dijk

et al.

Radiotherapy and Oncology, Journal year: 2025, Issue: unknown, P. 110852 - 110852

Published: March 1, 2025

In the HECKTOR 2022 challenge [1], several state-of-the-art (SOTA, achieving best performance) deep learning models were introduced for predicting recurrence-free period (RFP) in head and neck cancer patients using PET and CT images. This study investigates whether a conventional DenseNet architecture, with optimized numbers of layers and image-fusion strategies, could achieve performance comparable to these SOTA models. The dataset comprises 489 oropharyngeal cancer (OPC) patients from seven distinct centers. It was randomly divided into a training set (n = 369) and an independent test set (n = 120). Furthermore, an additional 400 OPC patients, who underwent (chemo)radiotherapy at our center, were employed for external testing. Each patient's data included pre-treatment CT and PET scans, manually generated GTV (gross tumour volume) contours of primary tumors and lymph nodes, and RFP information. The present models were compared against three SOTA models developed on the same dataset. When inputting PET and CT with an early fusion approach (considering them as different channels of the input), DenseNet81 (with 81 layers) obtained an internal test C-index of 0.69, a metric comparable to the SOTA models. Notably, removal of the GTV contours from the input yielded the same internal C-index of 0.69 while improving the external test C-index from 0.59 to 0.63. Compared with PET-only models, models utilizing late fusion (concatenation of extracted features) of CT and PET demonstrated superior C-index values of 0.68 and 0.66 on the two test sets. Overall, a basic DenseNet architecture can achieve predictive performance on par with SOTA models featuring more intricate architectures for outcome prediction of OPC from PET and CT imaging.
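
The fusion terminology above can be made concrete with a short sketch. Below is a hedged, illustrative PyTorch/MONAI example of early fusion (PET and CT stacked as input channels of one 3D DenseNet) versus late fusion (separate per-modality encoders whose extracted features are concatenated before a risk head). MONAI's DenseNet121 stands in for the paper's DenseNet81, and the input shapes, feature dimension, and risk head are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch, not the authors' code: early vs. late PET/CT fusion with a
# 3D DenseNet backbone. MONAI's DenseNet121 stands in for DenseNet81.
import torch
import torch.nn as nn
from monai.networks.nets import DenseNet121

class EarlyFusionDenseNet(nn.Module):
    """PET and CT stacked as two input channels of a single network."""
    def __init__(self):
        super().__init__()
        self.net = DenseNet121(spatial_dims=3, in_channels=2, out_channels=1)

    def forward(self, ct, pet):                       # each: (B, 1, D, H, W)
        return self.net(torch.cat([ct, pet], dim=1))  # risk score (B, 1)

class LateFusionDenseNet(nn.Module):
    """Separate encoders per modality; extracted features are concatenated."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.ct_net = DenseNet121(spatial_dims=3, in_channels=1, out_channels=feat_dim)
        self.pet_net = DenseNet121(spatial_dims=3, in_channels=1, out_channels=feat_dim)
        self.head = nn.Linear(2 * feat_dim, 1)

    def forward(self, ct, pet):
        feats = torch.cat([self.ct_net(ct), self.pet_net(pet)], dim=1)
        return self.head(feats)                       # risk score (B, 1)

if __name__ == "__main__":
    ct, pet = torch.randn(2, 1, 64, 64, 64), torch.randn(2, 1, 64, 64, 64)
    print(EarlyFusionDenseNet()(ct, pet).shape)       # torch.Size([2, 1])
    print(LateFusionDenseNet()(ct, pet).shape)        # torch.Size([2, 1])
```

In a survival setting such as the one described above, the scalar output would typically be trained with a Cox or ranking loss and evaluated with the C-index.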

Language: English

Transformers and large language models in healthcare: A review
Subhash Nerella, Sabyasachi Bandyopadhyay, Jiaqing Zhang

et al.

Artificial Intelligence in Medicine, Journal year: 2024, Issue: 154, P. 102900 - 102900

Published: June 5, 2024

With Artificial Intelligence (AI) increasingly permeating various aspects of society, including healthcare, the adoption of the Transformers neural network architecture is rapidly changing many applications. Transformer is a type of deep learning architecture initially developed to solve general-purpose Natural Language Processing (NLP) tasks, which has subsequently been adapted in many fields, including healthcare. In this survey paper, we provide an overview of how this architecture has been adopted to analyze various forms of healthcare data, including clinical NLP, medical imaging, structured Electronic Health Records (EHR), social media, bio-physiological signals, and biomolecular sequences. We also include articles that used the transformer architecture for generating surgical instructions and predicting adverse outcomes after surgery under the umbrella of critical care. Under diverse settings, these models have been used for clinical diagnosis, report generation, data reconstruction, and drug/protein synthesis. Finally, we discuss the benefits and limitations of using transformers in healthcare and examine issues such as computational cost, model interpretability, fairness, alignment with human values, ethical implications, and environmental impact.

Language: English

Cited by

19

Deep versus Handcrafted Tensor Radiomics Features: Prediction of Survival in Head and Neck Cancer Using Machine Learning and Fusion Techniques
Mohammad R. Salmanpour, Seyed Masoud Rezaeijo, Mahdi Hosseinzadeh

et al.

Diagnostics, Journal year: 2023, Issue: 13(10), P. 1696 - 1696

Published: May 11, 2023

Although handcrafted radiomics features (RF) are commonly extracted via software, employing deep features (DF) from deep learning (DL) algorithms merits significant investigation. Moreover, a "tensor" paradigm, where various flavours of a given feature are generated and explored, can provide added value. We aimed to employ conventional and tensor DFs and compare their outcome prediction performance to that of conventional and tensor RFs.
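
As a hedged illustration of the feature-level fusion idea in the title, the snippet below simply concatenates a handcrafted radiomics feature matrix (RF) with a deep feature matrix (DF) before a standard machine-learning classifier. The array sizes, the binarized survival label, and the random-forest model are placeholders rather than the authors' pipeline.

```python
# Hedged sketch: feature-level fusion of handcrafted radiomics features (RF)
# and deep features (DF) by concatenation, followed by a standard classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients = 100
rf_feats = rng.normal(size=(n_patients, 215))    # e.g. handcrafted radiomics features
df_feats = rng.normal(size=(n_patients, 1024))   # e.g. penultimate-layer CNN features
label = rng.integers(0, 2, size=n_patients)      # binarized survival outcome (placeholder)

fused = np.concatenate([rf_feats, df_feats], axis=1)      # simple feature fusion
model = make_pipeline(StandardScaler(),
                      RandomForestClassifier(n_estimators=200, random_state=0))
scores = cross_val_score(model, fused, label, cv=5, scoring="roc_auc")
print("5-fold AUC:", scores.mean().round(3))
```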

Language: English

Cited by

43

Overview of the HECKTOR Challenge at MICCAI 2022: Automatic Head and Neck Tumor Segmentation and Outcome Prediction in PET/CT
Vincent Andrearczyk, Valentin Oreiller, Moamen Abobakr

et al.

Lecture Notes in Computer Science, Journal year: 2023, Issue: unknown, P. 1 - 30

Published: Jan. 1, 2023

Language: English

Cited by

42

Auto-segmentation of head and neck tumors in positron emission tomography images using non-local means and morphological frameworks

Sahel Heydarheydari, Mohammad Javad Tahmasebi Birgani, Seyed Masoud Rezaeijo

et al.

Polish Journal of Radiology, Journal year: 2023, Issue: 88, P. 365 - 370

Published: Aug. 14, 2023

Accurately segmenting head and neck cancer (HNC) tumors in medical images is crucial for effective treatment planning. However, current methods for HNC segmentation are limited in their accuracy and efficiency. The present study aimed to design a model for segmenting HNC tumors in three-dimensional (3D) positron emission tomography (PET) images using Non-Local Means (NLM) and morphological operations. The proposed model was tested using data from the HECKTOR challenge public dataset, which included 408 patient images with HNC tumors. NLM was utilized for image noise reduction and preservation of critical image information. Following pre-processing, morphological operations were used to assess the similarity of intensity and edge information within the images. The Dice score and Intersection Over Union (IoU) were used to evaluate agreement between the manual and predicted segmentation results. The model achieved an average Dice score of 81.47 ± 3.15, an IoU of 80 ± 4.5, and an accuracy of 94.03 ± 4.44, demonstrating its effectiveness in segmenting HNC tumors in PET images. The algorithm provides the capability to produce patient-specific tumor segmentations without manual interaction, addressing the limitations of current segmentation methods. It has the potential to improve treatment planning and aid the development of personalized medicine. Additionally, this model can be extended to effectively segment other organs from limited annotated medical images.
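
A minimal, hedged sketch of the kind of pipeline described above (non-local means denoising followed by thresholding and morphological clean-up, with the Dice score used for evaluation) is given below using scikit-image. The Otsu threshold, footprint sizes, and filter parameters are illustrative assumptions, not the published method.

```python
# Hedged sketch, not the published pipeline: NLM denoising of a PET volume,
# intensity thresholding, morphological clean-up, and a Dice score helper.
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, binary_closing, ball

def segment_pet(pet_volume: np.ndarray) -> np.ndarray:
    sigma = float(np.mean(estimate_sigma(pet_volume)))
    denoised = denoise_nl_means(pet_volume, h=1.15 * sigma, sigma=sigma,
                                patch_size=5, patch_distance=6, fast_mode=True)
    mask = denoised > threshold_otsu(denoised)   # crude intensity threshold
    mask = binary_opening(mask, ball(1))         # remove small speckle
    mask = binary_closing(mask, ball(2))         # fill small holes
    return mask

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum() + 1e-8)
```

Note that the Dice of 81.47 reported in the abstract is on a 0-100 scale; the helper above returns the equivalent value in [0, 1].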

Language: English

Cited by

32

Gradient Map-Assisted Head and Neck Tumor Segmentation: A Pre-RT to Mid-RT Approach in MRI-Guided Radiotherapy
Jintao Ren, Kim Hochreuter, Mathis Rasmussen

et al.

Lecture Notes in Computer Science, Journal year: 2025, Issue: unknown, P. 36 - 49

Published: Jan. 1, 2025

Language: English

Cited by

1

PET/CT based transformer model for multi-outcome prediction in oropharyngeal cancer
Baoqiang Ma, Jiapan Guo, Alessia de Biase

et al.

Radiotherapy and Oncology, Journal year: 2024, Issue: 197, P. 110368 - 110368

Published: June 2, 2024

Language: English

Cited by

8

From Head and Neck Tumour and Lymph Node Segmentation to Survival Prediction on PET/CT: An End-to-End Framework Featuring Uncertainty, Fairness, and Multi-Region Multi-Modal Radiomics
Zohaib Salahuddin, Yi Chen, Xian Zhong

et al.

Cancers, Journal year: 2023, Issue: 15(7), P. 1932 - 1932

Published: March 23, 2023

Automatic delineation and detection of the primary tumour (GTVp) and lymph nodes (GTVn) using PET and CT in head and neck cancer, together with recurrence-free survival prediction, can be useful for diagnosis and patient risk stratification. We used data from nine different centres, with 524 and 359 cases for training and testing, respectively. We utilised posterior sampling of the weight space of the proposed segmentation model to estimate uncertainty for false positive reduction. We explored the prognostic potential of radiomics features extracted from the predicted GTVp and GTVn and used SHAP analysis for explainability. We evaluated the bias of the models with respect to age, gender, chemotherapy, HPV status, and lesion size. We achieved an aggregate Dice score of 0.774 and 0.760 on the test set for GTVp and GTVn, respectively, and observed a per-image false positive reduction of 19.5% and 7.14% using the uncertainty threshold. Radiomics features extracted from both GTVp and GTVn are the most prognostic, and our model achieves a C-index of 0.672 on the test set. Our framework incorporates uncertainty estimation, fairness, and explainability, demonstrating accurate segmentation and recurrence-free survival prediction.
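
To make the uncertainty-based false positive reduction step concrete, here is a hedged sketch that substitutes Monte Carlo dropout for the paper's posterior weight sampling: repeated stochastic forward passes give a mean probability map and a voxelwise uncertainty map, and predicted connected components with low mean confidence are discarded. The model, thresholds, and helper names are assumptions for illustration.

```python
# Hedged sketch: MC-dropout as a stand-in for posterior weight sampling,
# followed by confidence-based removal of predicted connected components.
import numpy as np
import torch
from scipy import ndimage

@torch.no_grad()
def mc_predict(model, image, n_samples=10):
    model.train()                                    # keep dropout active at test time
    probs = torch.stack([torch.sigmoid(model(image)) for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)               # mean prediction, uncertainty map

def reduce_false_positives(mean_prob, uncertainty, prob_thr=0.5, conf_thr=0.8):
    mask = (mean_prob > prob_thr).squeeze().cpu().numpy()
    confidence = (1.0 - uncertainty).squeeze().cpu().numpy()
    labels, n = ndimage.label(mask)                  # connected components
    keep = np.zeros_like(mask)
    for i in range(1, n + 1):
        component = labels == i
        if confidence[component].mean() >= conf_thr: # keep only confident lesions
            keep |= component
    return keep
```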

Language: English

Cited by

15

Multi-task deep learning-based radiomic nomogram for prognostic prediction in locoregionally advanced nasopharyngeal carcinoma
Bingxin Gu, Mingyuan Meng, Mingzhen Xu

et al.

European Journal of Nuclear Medicine and Molecular Imaging, Journal year: 2023, Issue: 50(13), P. 3996 - 4009

Published: Aug. 19, 2023

Abstract
Purpose: Prognostic prediction is crucial to guide individual treatment for locoregionally advanced nasopharyngeal carcinoma (LA-NPC) patients. Recently, multi-task deep learning was explored for joint prognostic prediction and tumor segmentation in various cancers, resulting in promising performance. This study aims to evaluate its clinical value for LA-NPC patients.
Methods: A total of 886 LA-NPC patients acquired from two medical centers were enrolled, including clinical data, [18F]FDG PET/CT images, and follow-up of progression-free survival (PFS). We adopted a multi-task deep learning model (DeepMTS) to jointly perform prognostic prediction (DeepMTS-Score) and tumor segmentation from FDG-PET/CT images. The DeepMTS-derived segmentation masks were leveraged to extract handcrafted radiomics features, which were also used for prognostic prediction (AutoRadio-Score). Finally, we developed a multi-task deep learning-based radiomic (MTDLR) nomogram by integrating the DeepMTS-Score, the AutoRadio-Score, and clinical data. Harrell's concordance indices (C-index) and time-independent receiver operating characteristic (ROC) analysis were used to evaluate the discriminative ability of the proposed MTDLR nomogram. For patient stratification, the PFS rates of high- and low-risk patients were calculated using the Kaplan–Meier method and compared with the observed PFS probability.
Results: Our MTDLR nomogram achieved C-indices of 0.818 (95% confidence interval (CI): 0.785–0.851), 0.752 (95% CI: 0.638–0.865), and 0.717 (95% CI: 0.641–0.793) and areas under the curve (AUC) of 0.859 (95% CI: 0.822–0.895), 0.769 (95% CI: 0.642–0.896), and 0.730 (95% CI: 0.634–0.826) in the training, internal validation, and external validation cohorts, which showed a statistically significant improvement over conventional nomograms. The nomogram also divided patients into significantly different risk groups.
Conclusion: Our study demonstrated that the MTDLR nomogram can perform reliable and accurate prognostic prediction for LA-NPC patients and enabled better patient stratification, which could facilitate personalized treatment planning.
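
For readers less familiar with the evaluation used above, the hedged sketch below shows how a prognostic score (such as a nomogram output) is typically assessed with Harrell's C-index and Kaplan–Meier risk stratification using the lifelines library. The synthetic data, the median risk split, and the variable names are illustrative assumptions, not the study's analysis code.

```python
# Hedged sketch: C-index and Kaplan-Meier risk stratification with lifelines.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
risk_score = rng.normal(size=200)                 # e.g. nomogram output per patient
pfs_months = rng.exponential(scale=40, size=200)  # follow-up time (synthetic)
event = rng.integers(0, 2, size=200)              # 1 = progression observed

# Harrell's C-index: higher risk should pair with shorter survival, so the
# score is negated before being passed as a "predicted survival" ordering.
cindex = concordance_index(pfs_months, -risk_score, event)

# Median split into high- and low-risk groups, compared with a log-rank test.
high = risk_score > np.median(risk_score)
km_high, km_low = KaplanMeierFitter(), KaplanMeierFitter()
km_high.fit(pfs_months[high], event[high], label="high risk")
km_low.fit(pfs_months[~high], event[~high], label="low risk")
p = logrank_test(pfs_months[high], pfs_months[~high], event[high], event[~high]).p_value
print(f"C-index={cindex:.3f}, log-rank p={p:.3g}")
```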

Language: English

Cited by

14

SwinCross: Cross-modal Swin transformer for head-and-neck tumor segmentation in PET/CT images

Gary Y. Li, Junyu Chen, Se-In Jang

et al.

Medical Physics, Journal year: 2023, Issue: 51(3), P. 2096 - 2107

Published: Sep. 30, 2023

Abstract
Background: Radiotherapy (RT) combined with cetuximab is the standard treatment for patients with inoperable head and neck cancers. Segmentation of head and neck (H&N) tumors is a prerequisite for radiotherapy planning but a time-consuming process. In recent years, deep convolutional neural networks (DCNN) have become the de facto standard for automated image segmentation. However, due to the expensive computational cost associated with enlarging the field of view in DCNNs, their ability to model long-range dependency is still limited, and this can result in sub-optimal segmentation performance for objects with background context spanning over long distances. On the other hand, Transformer models have demonstrated excellent capabilities in capturing such information in several semantic segmentation tasks performed on medical images.
Purpose: Despite the impressive representation capacity of vision transformer models, current transformer-based models still suffer from inconsistent and incorrect dense predictions when fed with multi-modal input data. We suspect that the power of the self-attention mechanism may be limited in extracting the complementary information that exists in multi-modal data. To this end, we propose a novel model, here debuted, the Cross-modal Swin Transformer (SwinCross), with a cross-modal attention (CMA) module to incorporate cross-modal feature extraction at multiple resolutions.
Methods: The proposed architecture is a 3D segmentation network with two main components: (1) a cross-modal 3D Swin Transformer for integrating information from multiple modalities (PET and CT), and (2) a cross-modal shifted window attention block for learning complementary information between the modalities. To evaluate the efficacy of our approach, we conducted experiments and ablation studies on the HECKTOR 2021 challenge dataset. We compared our method against nnU-Net (the backbone of the top-5 methods in HECKTOR 2021) and other state-of-the-art transformer-based models, including UNETR and Swin UNETR. The experiments employed a five-fold cross-validation setup using PET and CT images.
Results: Empirical evidence demonstrates that the proposed method consistently outperforms the comparative techniques. This success is attributed to the CMA module's ability to enhance inter-modality feature representations between PET and CT during head-and-neck tumor segmentation. Notably, SwinCross consistently surpasses the transformer-based baselines across all five folds, showcasing its proficiency in learning multi-modal feature representations at varying resolutions through the cross-modal attention modules.
Conclusions: We introduced a cross-modal Swin Transformer for automating the delineation of head-and-neck tumors in PET/CT images. Our model incorporates a cross-modality attention module, enabling the exchange of features between the modalities, and the experimental results establish its superiority in capturing improved inter-modality correlations. Furthermore, the methodology holds applicability to other tasks involving different imaging modalities like SPECT/CT or PET/MRI.
Code: https://github.com/yli192/SwinCross_CrossModalSwinTransformer_for_Medical_Image_Segmentation
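
The cross-modal attention (CMA) idea can be illustrated with a hedged, minimal PyTorch block in which PET-derived tokens attend to CT-derived tokens and vice versa. The token counts, embedding dimension, and the residual/LayerNorm arrangement are assumptions for illustration and do not reproduce the SwinCross shifted-window implementation (see the linked repository above for the authors' code).

```python
# Hedged sketch, not the SwinCross implementation: a minimal cross-modal
# attention block so each modality can attend to complementary features.
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    def __init__(self, dim=96, heads=4):
        super().__init__()
        self.pet_from_ct = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ct_from_pet = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_pet = nn.LayerNorm(dim)
        self.norm_ct = nn.LayerNorm(dim)

    def forward(self, pet_tokens, ct_tokens):          # each: (B, N, dim)
        pet_upd, _ = self.pet_from_ct(pet_tokens, ct_tokens, ct_tokens)
        ct_upd, _ = self.ct_from_pet(ct_tokens, pet_tokens, pet_tokens)
        return (self.norm_pet(pet_tokens + pet_upd),
                self.norm_ct(ct_tokens + ct_upd))

if __name__ == "__main__":
    pet = torch.randn(2, 343, 96)   # e.g. flattened 7x7x7 window tokens
    ct = torch.randn(2, 343, 96)
    out_pet, out_ct = CrossModalAttention()(pet, ct)
    print(out_pet.shape, out_ct.shape)
```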

Language: English

Cited by

14

Evolving Horizons in Radiation Therapy Auto-Contouring: Distilling Insights, Embracing Data-Centric Frameworks, and Moving Beyond Geometric Quantification
Kareem A. Wahid, Carlos Cárdenas, Barbara Marquez

et al.

Advances in Radiation Oncology, Journal year: 2024, Issue: 9(7), P. 101521 - 101521

Published: April 21, 2024

Historically, clinician-derived contouring of tumors and healthy tissues has been crucial for radiotherapy (RT) planning. In recent years, advances in artificial intelligence (AI), predominantly deep learning (DL), have rapidly improved automated contouring for RT applications, particularly for routine organs-at-risk [1–3]. Despite research efforts actively promoting its broader acceptance, the clinical adoption of auto-contouring is not yet standard practice.

Language: English

Cited by

5