Evaluating the potential of Distribution of Relaxation Times analysis for plant agriculture
Maxime Van Haeverbeke, Bernard De Baets, Michiel Stock

et al.

Computers and Electronics in Agriculture, Journal Year: 2023, Volume and Issue: 213, P. 108249 - 108249

Published: Sept. 22, 2023

Language: English

Multimodal data fusion for cancer biomarker discovery with deep learning
Sandra Steyaert, Marija Pizurica, Divya Nagaraj

et al.

Nature Machine Intelligence, Journal Year: 2023, Volume and Issue: 5(4), P. 351 - 362

Published: April 6, 2023

Language: English

Citations

142

Artificial intelligence-based methods for fusion of electronic health records and imaging data
Farida Mohsen, Hazrat Ali, Nady El Hajj

et al.

Scientific Reports, Journal Year: 2022, Volume and Issue: 12(1)

Published: Oct. 26, 2022

Healthcare data are inherently multimodal, including electronic health records (EHR), medical images, and multi-omics data. Combining these multimodal data sources contributes to a better understanding of human health and provides optimal personalized healthcare. The most important question when using multimodal data is how to fuse them, a field of growing interest among researchers. Advances in artificial intelligence (AI) technologies, particularly machine learning (ML), enable the fusion of different modalities to provide useful insights. To this end, in this scoping review, we focus on synthesizing and analyzing the literature that uses AI fusion techniques for clinical applications. More specifically, we focus on studies that only fused EHR with medical imaging data to develop various AI methods. We present a comprehensive analysis of the fusion strategies, the diseases and clinical outcomes for which fusion was used, the ML algorithms used to perform each application, and the available multimodal medical datasets. We followed the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines. We searched Embase, PubMed, Scopus, and Google Scholar to retrieve relevant studies. After pre-processing and screening, we extracted data from 34 studies that fulfilled the inclusion criteria. We found that the number of studies fusing imaging data with EHR is increasing, doubling from 2020 to 2021. In our analysis, a typical workflow was observed: feeding raw data, fusing the modalities by applying conventional machine learning (ML) or deep learning (DL) algorithms, and finally, evaluating the fusion through clinical outcome predictions. Specifically, early fusion was the most used technique across applications (22 out of 34 studies). Multimodality fusion models outperformed traditional single-modality models on the same task. Disease diagnosis and prediction were the most common clinical outcomes (reported in 20 and 10 studies, respectively). Neurological disorders were the dominant disease category (16 studies). From an algorithmic perspective, conventional ML was the most used (19 studies), compared to DL. Multimodal data used in the included studies were mostly drawn from private repositories (21 studies). Through this scoping review, we offer new insights for researchers interested in knowing the current state of knowledge within this research field.
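The early-fusion workflow this review identifies (raw features from each modality concatenated into a single representation before one joint model) can be sketched as follows; the patient count, feature names, and dimensions are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy features: 4 patients, 3 EHR features (e.g. age, a lab
# value, BMI) and 5 image-derived features (e.g. from a CNN encoder).
ehr = rng.normal(size=(4, 3))
img = rng.normal(size=(4, 5))

# Early fusion: concatenate the modality features into one vector per
# patient before any joint model sees them.
fused = np.concatenate([ehr, img], axis=1)

# A single downstream model then operates on the fused representation.
w = rng.normal(size=fused.shape[1])
scores = fused @ w
print(fused.shape)  # each patient is now one 8-dimensional vector
```

A conventional ML classifier (the most common choice in the reviewed studies) or a DL model can then be trained on `fused` against a clinical outcome label.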

Language: English

Citations

97

Deep multimodal fusion of image and non-image data in disease diagnosis and prognosis: a review
Can Cui, Haichun Yang, Yaohong Wang

et al.

Progress in Biomedical Engineering, Journal Year: 2023, Volume and Issue: 5(2), P. 022001 - 022001

Published: March 9, 2023

Abstract The rapid development of diagnostic technologies in healthcare is leading to higher requirements for physicians to handle and integrate the heterogeneous, yet complementary data that are produced during routine practice. For instance, the personalized diagnosis and treatment planning for a single cancer patient relies on various images (e.g. radiology, pathology and camera images) and non-image data (e.g. clinical and genomic data). However, such decision-making procedures can be subjective, qualitative, and have large inter-subject variabilities. With recent advances in multimodal deep learning technologies, an increasing number of efforts have been devoted to a key question: how do we extract and aggregate multimodal information to ultimately provide more objective, quantitative computer-aided decision making? This paper reviews the recent studies on dealing with such a question. Briefly, this review will include (a) an overview of current multimodal learning workflows, (b) a summarization of multimodal fusion methods, (c) a discussion of performance, (d) applications in disease diagnosis and prognosis, and (e) challenges and future directions.

Language: English

Citations

85

Effective Techniques for Multimodal Data Fusion: A Comparative Analysis
Maciej Pawłowski, Anna Wróblewska, Sylwia Sysko-Romańczuk

et al.

Sensors, Journal Year: 2023, Volume and Issue: 23(5), P. 2381 - 2381

Published: Feb. 21, 2023

Data processing in robotics is currently challenged by the effective building of multimodal and common representations. Tremendous volumes of raw data are available, and their smart management is the core concept of multimodal learning, a new paradigm for data fusion. Although several techniques for building multimodal representations have been proven successful, they have not yet been analyzed and compared in a given production setting. This paper explored three of the most common techniques, (1) late fusion, (2) early fusion, and (3) sketch, and compared them in classification tasks. Our paper explored different types of data (modalities) that could be gathered by sensors serving a wide range of sensor applications. Our experiments were conducted on the Amazon Reviews, MovieLens25M, and MovieLens1M datasets. Their outcomes allowed us to confirm that the choice of fusion technique for building multimodal representations is crucial to obtain the highest possible model performance resulting from the proper modality combination. Consequently, we designed criteria for choosing this optimal data fusion technique.

Language: English

Citations

49

Deep learning based multimodal biomedical data fusion: An overview and comparative review
Junwei Duan, Jiaqi Xiong, Yinghui Li

et al.

Information Fusion, Journal Year: 2024, Volume and Issue: 112, P. 102536 - 102536

Published: Dec. 1, 2024

Language: English

Citations

44

Hybrid multimodal wearable sensors for comprehensive health monitoring
Kuldeep Mahato, Tamoghna Saha, Shichao Ding

et al.

Nature Electronics, Journal Year: 2024, Volume and Issue: unknown

Published: Sept. 23, 2024

Language: English

Citations

41

Multimodal Machine Learning Guides Low Carbon Aeration Strategies in Urban Wastewater Treatment
Hongcheng Wang, Yuqi Wang, Xu Wang

et al.

Engineering, Journal Year: 2024, Volume and Issue: 36, P. 51 - 62

Published: Feb. 9, 2024

The potential for reducing greenhouse gas (GHG) emissions and energy consumption in wastewater treatment can be realized through intelligent control, with machine learning (ML) and multimodality emerging as a promising solution. Here, we introduce an ML technique based on multimodal strategies, focusing specifically on aeration control in wastewater treatment plants (WWTPs). The generalization of the multimodal strategy is demonstrated on eight models. The results demonstrate that this strategy significantly enhances model performance on indicators relevant to environmental science and efficiency, exhibiting exceptional performance and interpretability. Integrating random forest with visual models achieves the highest accuracy in forecasting aeration quantity among the models, with a mean absolute percentage error of 4.4% and a coefficient of determination of 0.948. Practical testing in a full-scale plant reveals that the strategy can reduce operation costs by 19.8% compared to traditional fuzzy control methods. The application of these strategies in critical water domains is discussed. To foster accessibility and promote widespread adoption, the models are freely available on GitHub, thereby eliminating technical barriers and encouraging the use of artificial intelligence in urban wastewater treatment.
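The two accuracy figures this abstract reports, mean absolute percentage error (MAPE) and the coefficient of determination (R²), follow standard definitions; a minimal sketch with invented aeration-quantity values, for illustration only:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

# Invented forecast-vs-observed values, not data from the paper.
y_true = np.array([100.0, 120.0, 90.0, 110.0])
y_pred = np.array([ 98.0, 125.0, 88.0, 108.0])
print(round(mape(y_true, y_pred), 2), round(r2(y_true, y_pred), 3))  # 2.55 0.926
```

Lower MAPE and an R² closer to 1 both indicate a better forecast, which is the sense in which the reported 4.4% and 0.948 are strong results.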

Language: English

Citations

30

A twin convolutional neural network with hybrid binary optimizer for multimodal breast cancer digital image classification
Olaide N. Oyelade, Eric Aghiomesi Irunokhai, Hui Wang

et al.

Scientific Reports, Journal Year: 2024, Volume and Issue: 14(1)

Published: Jan. 6, 2024

Abstract There is a wide application of deep learning techniques to unimodal medical image analysis, with significant classification accuracy performance observed. However, real-world diagnosis of some chronic diseases such as breast cancer often requires multimodal data streams with different modalities of visual and textual content. Mammography, magnetic resonance imaging (MRI) and image-guided biopsy represent a few of the modalities considered by physicians in isolating cases of breast cancer. Unfortunately, most studies applying deep learning techniques to solving classification problems in digital breast images have narrowed their study to unimodal samples. This is understandable considering the challenging nature of multimodal abnormality classification, where the fusion of high-dimensional heterogeneous features learned from each modality needs to be projected into a common representation space. This paper presents a novel approach combining a dual/twin convolutional neural network (TwinCNN) framework to address the challenge of breast cancer image classification from multi-modalities. First, modality-based feature learning was achieved by extracting both low- and high-level features using the convolutional networks embedded in the TwinCNN. Secondly, to address the notorious problem of high dimensionality associated with the extracted features, a binary optimization method was adapted to effectively eliminate non-discriminant features from the search space. Furthermore, a method for feature fusion was applied to computationally leverage the ground-truth and predicted labels for each sample to enable multimodality classification. To evaluate the proposed method, mammography and histopathology samples from the benchmark datasets MIAS and BreakHis, respectively, were used. Experimental results obtained showed that the area under the curve (AUC) for the single modalities yielded 0.755 and 0.861871 for histology, and 0.791 and 0.638 for mammography. The investigated fusion method produced fused classification results of 0.977, 0.913, and 0.667 for histology and mammography. The findings confirmed that multimodal image classification based on the combination of image features and predicted labels improves performance. In addition, the contribution of the study shows that feature dimensionality reduction based on the binary optimizer supports the elimination of non-discriminant features capable of bottlenecking the classifier.

Language: English

Citations

20

A review of cancer data fusion methods based on deep learning
Yuxin Zhao, Xiaobo Li, Changjun Zhou

et al.

Information Fusion, Journal Year: 2024, Volume and Issue: 108, P. 102361 - 102361

Published: March 20, 2024

Language: English

Citations

20

Deep Learning Models for Diagnosis of Schizophrenia Using EEG Signals: Emerging Trends, Challenges, and Prospects
Rakesh Ranjan, Bikash Chandra Sahana, Ashish Kumar Bhandari

et al.

Archives of Computational Methods in Engineering, Journal Year: 2024, Volume and Issue: 31(4), P. 2345 - 2384

Published: Jan. 6, 2024

Language: English

Citations

19