Sparse Multi-Modal Graph Transformer with Shared-Context Processing for Representation Learning of Giga-pixel Images
Ramin Nakhli, Puria Azadi Moghadam, Hao‐Yang Mi

et al.

2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Journal Year: 2023, Volume and Issue: unknown, P. 11547 - 11557

Published: June 1, 2023

Processing giga-pixel whole slide histopathology images (WSI) is a computationally expensive task. Multiple instance learning (MIL) has become the conventional approach to process WSIs, in which these images are split into smaller patches for further processing. However, MIL-based techniques ignore explicit information about the individual cells within a patch. In this paper, by defining the novel concept of shared-context processing, we designed a multi-modal Graph Transformer (AMIGO) that uses the cellular graph within the tissue to provide a single representation for a patient, while taking advantage of the hierarchical structure of the tissue, enabling a dynamic focus between cell-level and tissue-level information. We benchmarked the performance of our model against multiple state-of-the-art methods in survival prediction and showed that ours can significantly outperform all of them, including Vision Transformer (ViT). More importantly, we show that our model is strongly robust to missing information, to an extent that it can achieve the same performance with as low as 20% of the data. Finally, on two different cancer datasets, we demonstrated that our model was able to stratify patients into low-risk and high-risk groups, while other state-of-the-art methods failed to achieve this goal. We also publish a large dataset of immunohistochemistry images (InUIT) containing 1,600 tissue microarray (TMA) cores from 188 patients along with their survival information, making it one of the largest publicly available datasets in this context.
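The cellular-graph idea is concrete enough to sketch. The following minimal PyTorch example (illustrative only, not the authors' AMIGO implementation; `CellGraphEncoder` and all names and sizes are assumptions) shows one round of message passing over a cell graph followed by attention pooling into a single patient-level embedding, mirroring the cell-level to tissue-level aggregation the abstract describes.

```python
# Minimal sketch, assuming a binary adjacency matrix over cells; not the
# authors' AMIGO code. All module and variable names are illustrative.
import torch
import torch.nn as nn

class CellGraphEncoder(nn.Module):
    """One round of mean-aggregation message passing plus attention pooling."""
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.msg = nn.Linear(in_dim, hid_dim)        # transforms neighbor messages
        self.self_loop = nn.Linear(in_dim, hid_dim)  # transforms the node itself
        self.attn = nn.Linear(hid_dim, 1)            # attention score per cell

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (num_cells, in_dim), adj: (num_cells, num_cells) binary adjacency
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neigh = adj @ x / deg                        # mean over graph neighbors
        h = torch.relu(self.msg(neigh) + self.self_loop(x))
        w = torch.softmax(self.attn(h), dim=0)       # cell-level attention weights
        return (w * h).sum(dim=0)                    # single graph-level embedding

# Toy usage: 50 cells with 16 features each and a random sparse adjacency.
x = torch.randn(50, 16)
adj = (torch.rand(50, 50) < 0.1).float()
patient_repr = CellGraphEncoder(16, 64)(x, adj)
print(patient_repr.shape)  # torch.Size([64])
```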

Language: English

Artificial intelligence for multimodal data integration in oncology
Jana Lipková, Richard J. Chen, Bowen Chen

et al.

Cancer Cell, Journal Year: 2022, Volume and Issue: 40(10), P. 1095 - 1110

Published: Oct. 1, 2022

In oncology, the patient state is characterized by a whole spectrum of modalities, ranging from radiology, histology, and genomics to electronic health records. Current artificial intelligence (AI) models operate mainly in the realm of a single modality, neglecting the broader clinical context, which inevitably diminishes their potential. Integration of different data modalities provides opportunities to increase the robustness and accuracy of diagnostic and prognostic models, bringing AI closer to clinical practice. AI models are also capable of discovering novel patterns within and across modalities suitable for explaining differences in patient outcomes or treatment resistance. The insights gleaned from such models can guide exploration studies and contribute to the discovery of novel biomarkers and therapeutic targets. To support these advances, here we present a synopsis of AI methods and strategies for multimodal data fusion and association discovery. We outline approaches for AI interpretability and directions for AI-driven exploration through multimodal data interconnections. We examine challenges in clinical adoption and discuss emerging solutions.
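Reviews of this kind typically contrast early (feature-level) and late (decision-level) fusion. A minimal sketch of the two strategies, with made-up feature sizes and module names (not from the paper), might look as follows:

```python
# Illustrative contrast of two canonical fusion strategies; toy data only.
import torch
import torch.nn as nn

radiology = torch.randn(8, 32)   # toy batch of radiology features
genomics  = torch.randn(8, 100)  # toy batch of genomic features

# Early fusion: one model over concatenated input features.
early = nn.Sequential(nn.Linear(32 + 100, 64), nn.ReLU(), nn.Linear(64, 1))
early_logits = early(torch.cat([radiology, genomics], dim=1))

# Late fusion: independent per-modality models, fused at the decision level.
rad_head = nn.Linear(32, 1)
gen_head = nn.Linear(100, 1)
late_logits = 0.5 * (rad_head(radiology) + gen_head(genomics))

print(early_logits.shape, late_logits.shape)  # torch.Size([8, 1]) twice
```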

Language: English

Citations: 334

Multimodal deep learning for biomedical data fusion: a review
Sören Richard Stahlschmidt, Benjamin Ulfenborg, Jane Synnergren

et al.

Briefings in Bioinformatics, Journal Year: 2021, Volume and Issue: 23(2)

Published: Dec. 14, 2021

Biomedical data are becoming increasingly multimodal and thereby capture the underlying complex relationships among biological processes. Deep learning (DL)-based data fusion strategies are a popular approach for modeling these nonlinear relationships. Therefore, we review the current state-of-the-art of such methods and propose a detailed taxonomy that facilitates more informed choices of fusion strategies for biomedical applications, as well as research on novel methods. By doing so, we find that deep fusion strategies often outperform unimodal and shallow approaches. Additionally, the proposed subcategories of fusion strategies show different advantages and drawbacks. The review of current methods has shown that, especially for intermediate fusion strategies, joint representation learning is the preferred approach, as it effectively models the complex interactions of different levels of biological organization. Finally, we note that gradual fusion, based on prior knowledge or on search strategies, is a promising future research path. Similarly, utilizing transfer learning might overcome sample size limitations of multimodal data sets. As these data sets become increasingly available, multimodal DL approaches present the opportunity to train holistic models that can learn the complex regulatory dynamics behind health and disease.
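The joint-representation idea favored for intermediate fusion can be sketched directly: encode each modality separately, then learn a shared latent over the concatenated encodings. The model below is a minimal illustration under assumed names and dimensions, not the review's code:

```python
# Intermediate fusion via joint representation learning; illustrative only.
import torch
import torch.nn as nn

class JointRepresentationModel(nn.Module):
    def __init__(self, dims: dict[str, int], latent: int = 32):
        super().__init__()
        # One small encoder per modality.
        self.encoders = nn.ModuleDict(
            {name: nn.Sequential(nn.Linear(d, latent), nn.ReLU())
             for name, d in dims.items()}
        )
        # The joint layer models interactions between the modality latents.
        self.joint = nn.Sequential(nn.Linear(latent * len(dims), latent), nn.ReLU())
        self.head = nn.Linear(latent, 1)

    def forward(self, batch: dict[str, torch.Tensor]) -> torch.Tensor:
        zs = [self.encoders[name](x) for name, x in batch.items()]
        return self.head(self.joint(torch.cat(zs, dim=1)))

model = JointRepresentationModel({"expression": 200, "methylation": 150})
out = model({"expression": torch.randn(4, 200), "methylation": torch.randn(4, 150)})
print(out.shape)  # torch.Size([4, 1])
```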

Language: English

Citations: 329

Pan-cancer integrative histology-genomic analysis via multimodal deep learning
Richard J. Chen, Ming Y. Lu, Drew F. K. Williamson

et al.

Cancer Cell, Journal Year: 2022, Volume and Issue: 40(8), P. 865 - 878.e6

Published: Aug. 1, 2022

The rapidly emerging field of computational pathology has demonstrated promise in developing objective prognostic models from histology images. However, most prognostic models are based on either histology or genomics alone and do not address how these data sources can be integrated to develop joint image-omic prognostic models. Additionally, identifying explainable morphological and molecular descriptors that govern such prognosis is of interest. We use multimodal deep learning to jointly examine pathology whole-slide images and molecular profile data from 14 cancer types. Our weakly supervised, multimodal deep-learning algorithm is able to fuse these heterogeneous modalities to predict outcomes and discover prognostic features that correlate with poor and favorable outcomes. We present all analyses for morphological and molecular correlates of patient prognosis across the 14 cancer types at both a disease and a patient level in an interactive open-access database to allow for further exploration, prognostic biomarker discovery, and feature assessment.
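Joint image-omic prognostic models of this kind are commonly trained with a discrete-time survival likelihood that handles right-censoring. A simplified sketch of such a loss, with assumed shapes and names rather than the authors' released code:

```python
# Discrete-time survival negative log-likelihood with censoring; a sketch.
# The network predicts a hazard per time bin; censored patients contribute
# only the probability of surviving past their follow-up bin.
import torch

def nll_discrete_survival(hazard_logits, time_bin, event):
    # hazard_logits: (batch, bins); time_bin: (batch,) long; event: (batch,) 0/1
    hazards = torch.sigmoid(hazard_logits)
    survival = torch.cumprod(1.0 - hazards, dim=1)            # S(t) per bin
    s_at_t = survival.gather(1, time_bin.unsqueeze(1)).squeeze(1)
    h_at_t = hazards.gather(1, time_bin.unsqueeze(1)).squeeze(1)
    eps = 1e-7
    # Uncensored: density h(t) * S(t-1), with S(t-1) = S(t) / (1 - h(t)).
    uncensored = torch.log(h_at_t + eps) + torch.log(s_at_t / (1 - h_at_t + eps) + eps)
    censored = torch.log(s_at_t + eps)                        # survived past t
    return -(event * uncensored + (1 - event) * censored).mean()

logits = torch.randn(4, 4)  # 4 patients, 4 time bins
loss = nll_discrete_survival(logits, torch.tensor([0, 2, 1, 3]),
                             torch.tensor([1.0, 0.0, 1.0, 0.0]))
print(loss.item())
```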

Language: English

Citations: 292

Multimodal Deep Learning
Amirreza Shaban, Safoora Yousefi

Springer optimization and its applications, Journal Year: 2024, Volume and Issue: unknown, P. 209 - 219

Published: Jan. 1, 2024

Language: English

Citations: 202

A survey of multimodal information fusion for smart healthcare: Mapping the journey from data to wisdom
Thanveer Shaik, Xiaohui Tao, Lin Li

et al.

Information Fusion, Journal Year: 2023, Volume and Issue: 102, P. 102040 - 102040

Published: Sept. 27, 2023

Multimodal medical data fusion has emerged as a transformative approach in smart healthcare, enabling a comprehensive understanding of patient health and personalized treatment plans. In this paper, the journey from data to information to knowledge to wisdom (DIKW) is explored through multimodal fusion for smart healthcare. We present a comprehensive review of multimodal medical data fusion focused on the integration of various data modalities. The review explores different approaches, such as feature selection, rule-based systems, machine learning, deep learning, and natural language processing, for fusing and analyzing multimodal data. This paper also highlights the challenges associated with multimodal fusion in healthcare. By synthesizing the reviewed frameworks and theories, it proposes a generic framework for multimodal medical data fusion that aligns with the DIKW model. Moreover, it discusses future directions related to the four pillars of healthcare: predictive, preventive, personalized, and participatory approaches. The components of the comprehensive survey presented in this paper form the foundation for a more successful implementation of multimodal fusion in smart healthcare. Our findings can guide researchers and practitioners in leveraging the power of multimodal fusion with state-of-the-art approaches to revolutionize healthcare and improve patient outcomes.

Language: English

Citations: 54

Deep learning for survival analysis: a review
Simon Wiegrebe, Philipp Kopper, Raphael Sonabend

et al.

Artificial Intelligence Review, Journal Year: 2024, Volume and Issue: 57(3)

Published: Feb. 19, 2024

The influx of deep learning (DL) techniques into the field of survival analysis in recent years has led to substantial methodological progress; for instance, learning from unstructured or high-dimensional data such as images, text, or omics data. In this work, we conduct a comprehensive systematic review of DL-based methods for time-to-event analysis, characterizing them according to both survival- and DL-related attributes. In summary, the reviewed methods often address only a small subset of tasks relevant to time-to-event data (e.g., single-risk, right-censored data) and neglect to incorporate more complex settings. Our findings are summarized in an editable, open-source, interactive table: https://survival-org.github.io/DL4Survival . As this research area is advancing rapidly, we encourage community contribution in order to keep the database up to date.
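The single-risk, right-censored setting the review identifies as dominant is usually optimized with a Cox partial likelihood over network risk scores. A compact, Breslow-style sketch (illustrative, not taken from the review or its table):

```python
# Negative Cox partial log-likelihood for right-censored data; a sketch.
import torch

def cox_partial_nll(risk, time, event):
    # risk: (n,) predicted log-hazards; time: (n,) follow-up; event: (n,) 0/1
    order = torch.argsort(time, descending=True)   # sort so risk set = prefix
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)   # log-sum-exp over risk set
    return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1.0)

risk = torch.randn(6, requires_grad=True)          # toy network outputs
loss = cox_partial_nll(risk, torch.rand(6),
                       torch.tensor([1., 0., 1., 1., 0., 1.]))
loss.backward()
print(loss.item())
```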

Language: English

Citations: 51

Autosurv: interpretable deep learning framework for cancer survival analysis incorporating clinical and multi-omics data
Lindong Jiang, Chao Xu, Yuntong Bai

et al.

npj Precision Oncology, Journal Year: 2024, Volume and Issue: 8(1)

Published: Jan. 5, 2024

Accurate prognosis for cancer patients can provide critical information for optimizing treatment plans and improving quality of life. Combining omics data and demographic/clinical information can offer a more comprehensive view of cancer prognosis than using omics or clinical data alone and can also reveal the underlying disease mechanisms at the molecular level. In this study, we developed and validated a deep learning framework to extract information from high-dimensional gene expression and miRNA expression data and conduct prognosis prediction for breast cancer and ovarian-cancer patients using multiple independent multi-omics datasets. Our model achieved significantly better prognosis prediction than current machine learning and deep learning approaches in various settings. Moreover, an interpretation method was applied to tackle the "black-box" nature of deep neural networks, and we identified features (i.e., genes, miRNAs, clinical variables) that were important for distinguishing predicted high- and low-risk patients. The significance of the identified features was partially supported by previous studies.
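The feature-extraction idea can be illustrated with a heavily reduced sketch: a variational autoencoder compresses concatenated gene-expression and miRNA profiles into a low-dimensional latent that also feeds a prognosis head. All names and sizes below are assumptions, not the actual Autosurv architecture:

```python
# Much-reduced VAE-plus-risk-head sketch for omics profiles; illustrative only.
import torch
import torch.nn as nn

class OmicsVAE(nn.Module):
    def __init__(self, in_dim: int, latent: int = 16):
        super().__init__()
        self.enc = nn.Linear(in_dim, 128)
        self.mu = nn.Linear(128, latent)
        self.logvar = nn.Linear(128, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                 nn.Linear(128, in_dim))
        self.risk = nn.Linear(latent, 1)   # prognosis score from the latent

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        recon = self.dec(z)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
        return recon, kl, self.risk(mu)

x = torch.randn(8, 500)              # toy: 500 combined omics features
recon, kl, risk = OmicsVAE(500)(x)
loss = nn.functional.mse_loss(recon, x) + 1e-3 * kl  # plus a survival loss in practice
print(loss.item(), risk.shape)
```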

Language: English

Citations: 22

CustOmics: A versatile deep-learning based strategy for multi-omics integration
Hakim Benkirane, Yoann Pradat, Stefan Michiels

et al.

PLoS Computational Biology, Journal Year: 2023, Volume and Issue: 19(3), P. e1010921 - e1010921

Published: March 6, 2023

The availability of patient cohorts with several types of omics data opens new perspectives for exploring a disease's underlying biological processes and developing predictive models. It also comes with new challenges in computational biology in terms of integrating high-dimensional and heterogeneous data in a fashion that captures the interrelationships between multiple genes and their functions. Deep learning methods offer promising perspectives for integrating multi-omics data. In this paper, we review existing integration strategies based on autoencoders and propose a customizable one whose principle relies on a two-phase approach. In the first phase, we adapt the training to each data source independently before learning cross-modality interactions in the second phase. By taking into account each source's singularity, we show that this approach succeeds at taking advantage of all data sources more efficiently than other strategies. Moreover, by adapting our architecture to the computation of Shapley additive explanations, our model can provide interpretable results in a multi-source setting. Using data from different TCGA cohorts, we demonstrate the performance of the proposed method on cancer test cases for several tasks, such as the classification of tumor and breast cancer subtypes, as well as survival outcome prediction. We show through experiments the strong performance of our architecture on seven datasets of various sizes and provide some interpretations of the results obtained. Our code is available at https://github.com/HakimBenkirane/CustOmics .
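The two-phase principle described in the abstract translates naturally into code. The sketch below uses assumed names and toy data (the real implementation is in the linked repository): phase 1 trains one autoencoder per omics source on reconstruction, then phase 2 fits a central network for cross-modality interactions on the per-source latents, which are kept frozen here as a simplification.

```python
# Two-phase multi-omics integration sketch; illustrative, not the repo's code.
import torch
import torch.nn as nn

def make_ae(dim, latent=32):
    return nn.ModuleDict({
        "enc": nn.Sequential(nn.Linear(dim, latent), nn.ReLU()),
        "dec": nn.Sequential(nn.Linear(latent, dim)),
    })

sources = {"rna": torch.randn(64, 300), "cnv": torch.randn(64, 200)}
aes = {name: make_ae(x.shape[1]) for name, x in sources.items()}

# Phase 1: each source is trained independently on reconstruction.
for name, x in sources.items():
    opt = torch.optim.Adam(aes[name].parameters(), lr=1e-3)
    for _ in range(50):
        z = aes[name]["enc"](x)
        loss = nn.functional.mse_loss(aes[name]["dec"](z), x)
        opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: a central network learns cross-modality interactions on the latents.
central = nn.Sequential(nn.Linear(32 * len(sources), 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(central.parameters(), lr=1e-3)
labels = torch.randint(0, 2, (64, 1)).float()   # toy downstream task
for _ in range(50):
    with torch.no_grad():                       # per-source encoders frozen here
        zs = torch.cat([aes[n]["enc"](x) for n, x in sources.items()], dim=1)
    loss = nn.functional.binary_cross_entropy_with_logits(central(zs), labels)
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```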

Language: English

Citations: 36

Machine and Deep Learning for Tuberculosis Detection on Chest X-Rays: Systematic Literature Review
Seng Hansun, Ahmadreza Argha, Siaw‐Teng Liaw

et al.

Journal of Medical Internet Research, Journal Year: 2023, Volume and Issue: 25, P. e43154 - e43154

Published: July 3, 2023

Background: Tuberculosis (TB) was the leading infectious cause of mortality globally prior to COVID-19, and chest radiography has an important role in the detection, and subsequent diagnosis, of patients with this disease. The conventional reading of chest radiographs by experts suffers from substantial within- and between-observer variability, indicating poor reliability of human readers. Substantial efforts have been made in utilizing various artificial intelligence-based algorithms to address the limitations of human reading of chest radiographs for diagnosing TB. Objective: This systematic literature review (SLR) aims to assess the performance of machine learning (ML) and deep learning (DL) in the detection of TB using chest radiography (chest x-ray [CXR]). Methods: In conducting and reporting the SLR, we followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. A total of 309 records were identified from the Scopus, PubMed, and IEEE (Institute of Electrical and Electronics Engineers) databases. We independently screened, reviewed, and assessed all available records and included 47 studies that met the inclusion criteria in this SLR. We also performed a risk of bias assessment using the Quality Assessment of Diagnostic Accuracy Studies version 2 (QUADAS-2) tool and a meta-analysis of the 10 studies that provided confusion matrix results. Results: Various CXR data sets were used in the included studies, the most popular ones being the Montgomery County (n=29) and Shenzhen (n=36) data sets. DL (n=34) was more commonly used than ML (n=7) in the included studies. Most studies used a radiologist's report as the reference standard. Support vector machine (n=5), k-nearest neighbors (n=3), and random forest (n=2) were the most popular ML approaches. Meanwhile, convolutional neural networks were the most widely used DL techniques, the 4 most popular applications being ResNet-50 (n=11), VGG-16 (n=8), VGG-19 (n=7), and AlexNet (n=6). Four performance metrics were popularly used, namely, accuracy (n=35), area under the curve (AUC; n=34), sensitivity (n=27), and specificity (n=23). In terms of performance results, ML showed higher accuracy (mean ~93.71%) and sensitivity (mean ~92.55%), while on average DL models achieved better AUC (mean ~92.12%) and specificity (mean ~91.54%). Based on the meta-analysis, the estimated pooled accuracies of the ML and DL methods were 0.9857 (95% CI 0.9477-1.00) and 0.9805 (95% CI 0.9255-1.00), respectively. From the risk of bias assessment, 17 studies were regarded as having unclear risks in the reference standard aspect and 6 studies in the flow and timing aspect. Only a few of the included studies had built applications based on their proposed solutions. Conclusions: Findings from this SLR confirm the high potential of both ML and DL for the detection of TB using CXR. Future studies need to pay close attention to the risk of bias, in particular the reference standard and the flow and timing aspects. Trial Registration: PROSPERO CRD42021277155; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=277155
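For context, the per-study metrics extracted from a confusion matrix, plus a naive sample-size-weighted pooling of accuracy, can be computed as below. The numbers are invented, and the paper's meta-analysis uses a proper statistical model rather than this simple weighting:

```python
# Per-study confusion-matrix metrics and a naive pooled accuracy; toy numbers.
def metrics(tp, fp, fn, tn):
    return {
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),   # recall on TB-positive CXRs
        "specificity": tn / (tn + fp),   # recall on TB-negative CXRs
    }

studies = [  # (tp, fp, fn, tn) per hypothetical study
    (90, 5, 10, 95),
    (80, 8, 6, 100),
    (70, 3, 9, 88),
]
per_study = [metrics(*s) for s in studies]
for m in per_study:
    print({k: round(v, 4) for k, v in m.items()})

# Naive fixed-effect pooling: weight each study's accuracy by its sample size.
sizes = [sum(s) for s in studies]
pooled = sum(m["accuracy"] * n for m, n in zip(per_study, sizes)) / sum(sizes)
print(f"pooled accuracy ~ {pooled:.4f}")
```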

Language: English

Citations: 27

Cross-Modal Translation and Alignment for Survival Analysis
Fengtao Zhou, Hao Chen

2023 IEEE/CVF International Conference on Computer Vision (ICCV), Journal Year: 2023, Volume and Issue: unknown, P. 21428 - 21437

Published: Oct. 1, 2023

With the rapid advances in high-throughput sequencing technologies, the focus of survival analysis has shifted from examining clinical indicators to incorporating genomic profiles with pathological images. However, existing methods either directly adopt a straightforward fusion of pathological features and genomic profiles for survival prediction, or take genomic profiles as guidance to integrate the features of pathological images. The former would overlook the intrinsic cross-modal correlations, while the latter would discard pathological information irrelevant to gene expression. To address these issues, we present a Cross-Modal Translation and Alignment (CMTA) framework to explore the intrinsic cross-modal correlations and transfer potential complementary information. Specifically, we construct two parallel encoder-decoder structures for multi-modal data to integrate intra-modal information and generate a cross-modal representation. Taking the generated cross-modal representation to enhance and recalibrate the intra-modal representation can significantly improve its discrimination for comprehensive survival analysis. To explore the intrinsic cross-modal correlations, we further design a cross-modal attention module as the information bridge between different modalities to perform cross-modal interactions and transfer complementary information. Our extensive experiments on five public TCGA datasets demonstrate that our proposed framework outperforms state-of-the-art methods. The source code has been publicly released.
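The cross-modal attention bridge is the most code-like element of the abstract. A minimal PyTorch sketch, with assumed shapes and names rather than the released CMTA code, in which tokens from one modality attend over the other (swap the arguments for the reverse direction):

```python
# Cross-modal attention bridge between genomic and pathology tokens; a sketch.
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_tokens, context_tokens):
        # query_tokens: (batch, n_q, dim) from one modality;
        # context_tokens: (batch, n_c, dim) from the other modality.
        out, _ = self.attn(query_tokens, context_tokens, context_tokens)
        return self.norm(query_tokens + out)   # residual connection + norm

bridge = CrossModalAttention(dim=64)
genomic = torch.randn(2, 6, 64)      # toy genomic embeddings
pathology = torch.randn(2, 100, 64)  # toy patch embeddings from a WSI
enhanced_genomic = bridge(genomic, pathology)
print(enhanced_genomic.shape)        # torch.Size([2, 6, 64])
```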

Language: English

Citations: 24