DSAFusion: Detail-semantic-aware network for infrared and low-light visible image fusion
Menghan Xia, Cheng-Hui Lin, Biyun Xu

et al.

Infrared Physics & Technology, Journal Year: 2025, Volume and Issue: unknown, P. 105804 - 105804

Published: March 1, 2025

Language: English

Infrared and Visible Image Fusion via Sparse Representation and Guided Filtering in Laplacian Pyramid Domain
Liangliang Li, Yan Shi, Ming Lv

et al.

Remote Sensing, Journal Year: 2024, Volume and Issue: 16(20), P. 3804 - 3804

Published: Oct. 13, 2024

The fusion of infrared and visible images can fully leverage the respective advantages of each, providing a more comprehensive and richer set of information. This is applicable in various fields such as military surveillance, night navigation, environmental monitoring, etc. In this paper, a novel image fusion method based on sparse representation and guided filtering in the Laplacian pyramid (LP) domain is introduced. The source images are decomposed into low- and high-frequency bands by the LP. Sparse representation has achieved significant effectiveness in image fusion and is used to process the low-frequency band; guided filtering has excellent edge-preserving effects and can effectively maintain the spatial continuity of the high-frequency band. Therefore, guided filtering is combined with the weighted sum of eight-neighborhood-based modified Laplacian (WSEML) to process the high-frequency bands. Finally, the inverse LP transform is used to reconstruct the fused image. We conducted simulation experiments on the publicly available TNO dataset to validate the superiority of our proposed algorithm in fusing infrared and visible images. Our fused results preserve both the thermal radiation characteristics of the infrared images and the detailed features of the visible images.
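The method is a three-stage pipeline: LP decomposition, per-band fusion, and inverse LP reconstruction. Below is a minimal Python/OpenCV sketch of that skeleton; the sparse-representation rule (low-frequency band) and the WSEML rule (high-frequency bands) are simplified to placeholder rules (averaging and max-absolute selection), and all function names are illustrative, not the authors' code.

```python
# Skeleton of LP-domain fusion: decompose both sources, fuse each band with
# its own rule, then invert the pyramid. The sparse-representation and WSEML
# rules from the paper are replaced by simple placeholders here.
import cv2
import numpy as np

def lp_decompose(img, levels=4):
    """Laplacian pyramid: high-frequency bands plus a low-frequency residual."""
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    bands = [g - cv2.pyrUp(gn, dstsize=g.shape[1::-1])
             for g, gn in zip(gauss[:-1], gauss[1:])]
    return bands, gauss[-1]

def lp_reconstruct(bands, low):
    """Inverse LP transform: collapse the pyramid back into an image."""
    img = low
    for band in reversed(bands):
        img = cv2.pyrUp(img, dstsize=band.shape[1::-1]) + band
    return img

def fuse(ir, vis, levels=4):
    """Fuse two aligned grayscale images of equal size."""
    ir_bands, ir_low = lp_decompose(ir, levels)
    vis_bands, vis_low = lp_decompose(vis, levels)
    low = 0.5 * (ir_low + vis_low)                   # placeholder for the SR rule
    bands = [np.where(np.abs(a) >= np.abs(b), a, b)  # placeholder for WSEML
             for a, b in zip(ir_bands, vis_bands)]
    return np.clip(lp_reconstruct(bands, low), 0, 255).astype(np.uint8)
```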

Language: English

Citations: 10

EH-former: Regional easy-hard-aware transformer for breast lesion segmentation in ultrasound images
Xiaolei Qu, Jiale Zhou, Jue Jiang

et al.

Information Fusion, Journal Year: 2024, Volume and Issue: 109, P. 102430 - 102430

Published: April 18, 2024

Language: English

Citations: 9

LSCANet: Differential features guided long-short cross attention network for infrared and visible image fusion
Baofeng Guo, Hongtao Huo, Xiaowen Liu

et al.

Signal Processing, Journal Year: 2025, Volume and Issue: 231, P. 109889 - 109889

Published: Jan. 9, 2025

Language: English

Citations: 1

SSDFusion: A scene-semantic decomposition approach for visible and infrared image fusion
Rui Ming, Yuze Xiao, Xinyu Liu

et al.

Pattern Recognition, Journal Year: 2025, Volume and Issue: unknown, P. 111457 - 111457

Published: Feb. 1, 2025

Language: English

Citations: 1

MaeFuse: Transferring Omni Features with Pretrained Masked Autoencoders for Infrared and Visible Image Fusion via Guided Training
Jiayang Li, Junjun Jiang, Pengwei Liang

et al.

IEEE Transactions on Image Processing, Journal Year: 2025, Volume and Issue: 34, P. 1340 - 1353

Published: Jan. 1, 2025

In this paper, we introduce MaeFuse, a novel autoencoder model designed for Infrared and Visible Image Fusion (IVIF). Existing approaches to image fusion often rely on training combined with downstream tasks to obtain high-level visual information, which is effective in emphasizing target objects and delivering impressive results in visual quality and task-specific applications. Instead of being driven by downstream tasks, our model, called MaeFuse, utilizes a pretrained encoder from Masked Autoencoders (MAE), which facilitates omni feature extraction for low-level reconstruction and high-level vision perception at low cost. In order to eliminate the domain gap of different modal features and the block effect caused by the MAE encoder, we further develop a guided training strategy. This strategy is meticulously crafted to ensure that the fusion layer seamlessly adjusts to the feature space of the encoder, gradually enhancing the fusion performance. The proposed method facilitates the comprehensive integration of feature vectors from both the infrared and visible modalities, thus preserving the rich details inherent in each modality. MaeFuse not only introduces a novel perspective in the realm of fusion techniques but also stands out with impressive performance across various public datasets. The code is available at https://github.com/Henry-Lee-real/MaeFuse.
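As a rough illustration of this setup, the PyTorch sketch below freezes a pretrained MAE-style encoder and trains only a small fusion head on the concatenated token features of the two modalities; the head architecture, the dimensions, and the stand-in encoder are assumptions, not the authors' implementation.

```python
# Sketch: frozen pretrained encoder + trainable fusion head over token features.
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Fuses per-token features from two modalities back into one feature map."""
    def __init__(self, dim=768):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, f_ir, f_vis):
        # Concatenate infrared and visible token features, then project back.
        return self.fuse(torch.cat([f_ir, f_vis], dim=-1))

def build(encoder, dim=768):
    for p in encoder.parameters():   # freeze the pretrained encoder; only the
        p.requires_grad = False      # fusion head receives gradient updates
    return FusionHead(dim)

encoder = nn.Linear(256, 768)        # stand-in for a real pretrained MAE encoder
head = build(encoder)
f_ir = encoder(torch.randn(2, 196, 256))    # (batch, tokens, dim)
f_vis = encoder(torch.randn(2, 196, 256))
print(head(f_ir, f_vis).shape)              # torch.Size([2, 196, 768])
```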

Language: English

Citations: 1

AMLCA: Additive multi-layer convolution-guided cross-attention network for visible and infrared image fusion
Dongliang Wang, Chuang Huang, Hao Pan

et al.

Pattern Recognition, Journal Year: 2025, Volume and Issue: unknown, P. 111468 - 111468

Published: Feb. 1, 2025

Language: English

Citations: 1

A novel integrative multimodal classifier to enhance the diagnosis of Parkinson’s disease
Xiaoyan Zhou, Luca Parisi, Wentao Huang

et al.

Briefings in Bioinformatics, Journal Year: 2025, Volume and Issue: 26(2)

Published: March 1, 2025

Abstract Parkinson’s disease (PD) is a complex, progressive neurodegenerative disorder with high heterogeneity, making early diagnosis difficult. Early detection and intervention are crucial for slowing PD progression. Understanding PD’s diverse pathways and mechanisms is key to advancing knowledge. Recent advances in noninvasive imaging and multi-omics technologies have provided valuable insights into PD’s underlying causes and biological processes. However, integrating these data sources remains challenging, especially when deriving meaningful low-level features that can serve as diagnostic indicators. This study developed and validated a novel integrative, multimodal predictive model for detecting PD based on features derived from multimodal data, including hematological information, proteomics, RNA sequencing, metabolomics, and dopamine transporter scan imaging, sourced from the Parkinson’s Progression Markers Initiative. Several architectures were investigated and evaluated, including support vector machine, eXtreme Gradient Boosting, fully connected neural networks with concatenation and joint modeling (FCNN_C and FCNN_JM), and an encoder-based multi-head cross-attention model (MMT_CA). The MMT_CA model demonstrated superior performance, achieving a balanced classification accuracy of 97.7%, thus highlighting its ability to capture and leverage cross-modality inter-dependencies to aid multimodal data analytics. Furthermore, feature importance analysis using SHapley Additive exPlanations not only identified key biomarkers that inform the models in this study, but also holds potential for future research aimed at integrated functional analyses of PD from a multimodal perspective, ultimately revealing targets required for precision medicine approaches to PD treatment down the line.
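The MMT_CA variant hinges on multi-head cross-attention between modality embeddings. A minimal PyTorch sketch of that mechanism follows, built on torch.nn.MultiheadAttention; the token shapes, mean pooling, and classifier head are illustrative assumptions rather than the study's exact architecture.

```python
# Sketch: one modality queries another via multi-head cross-attention,
# and the fused representation feeds a small classification head.
import torch
import torch.nn as nn

class CrossModalClassifier(nn.Module):
    def __init__(self, dim=128, heads=4, n_classes=2):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, omics, imaging):
        # Q = omics tokens, K = V = imaging tokens (cross-modality attention).
        fused, _ = self.attn(query=omics, key=imaging, value=imaging)
        return self.head(fused.mean(dim=1))   # pool over tokens, then classify

model = CrossModalClassifier()
logits = model(torch.randn(8, 16, 128), torch.randn(8, 32, 128))
print(logits.shape)   # torch.Size([8, 2]) -- e.g. PD vs. control
```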

Language: English

Citations: 1

Integrating sensor fusion with machine learning for comprehensive assessment of phenotypic traits and drought response in poplar species
Ziyang Zhou, Huichun Zhang, Liming Bian

et al.

Plant Biotechnology Journal, Journal Year: 2025, Volume and Issue: unknown

Published: March 30, 2025

Summary Increased drought frequency and severity in a warming climate threaten the health and stability of forest ecosystems, influencing the structure and functioning of forests while having far-reaching implications for global carbon storage and regulation. To effectively address the challenges posed by drought, it is imperative to monitor and assess the degree of drought stress in trees in a timely and accurate manner. In this study, a drought gradient experiment was conducted with poplar as the research object, and multimodal data were collected for subsequent analysis. A machine learning-based drought stress monitoring model was constructed, thereby enabling assessment of drought stress duration in trees. Four processing methods, namely data decomposition, layer fusion, feature fusion, and decision fusion, were employed to comprehensively evaluate the monitoring performance. Additionally, the potential of new phenotypic features obtained by the different processing methods is discussed. The results demonstrate that the optimal machine learning model thus constructed exhibits the best performance, with average accuracy, precision, recall, and F1 score reaching 0.85, 0.86, 0.85, respectively. Conversely, the novel phenotypic features derived through data decomposition, used as supplementary inputs, did not further augment the precision. This indicates that the fusion approach has clear advantages, offering a robust theoretical foundation and practical guidance for future tree drought response assessment.
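Two of the strategies named above, feature fusion and decision fusion, are easy to contrast in code. The scikit-learn sketch below does so on random stand-in arrays; the modality names, shapes, and classifier choice are assumptions for illustration only.

```python
# Sketch: feature-level fusion (concatenate modalities, one model) versus
# decision-level fusion (one model per modality, average the probabilities).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
spectral = rng.normal(size=(200, 50))   # stand-in for spectral sensor features
thermal = rng.normal(size=(200, 10))    # stand-in for thermal sensor features
y = rng.integers(0, 3, size=200)        # stand-in drought-duration classes

# Feature fusion: concatenate the modalities and train a single classifier.
feature_model = RandomForestClassifier(random_state=0)
feature_model.fit(np.hstack([spectral, thermal]), y)

# Decision fusion: train per-modality classifiers, then average their outputs.
m_spec = RandomForestClassifier(random_state=0).fit(spectral, y)
m_therm = RandomForestClassifier(random_state=0).fit(thermal, y)
proba = (m_spec.predict_proba(spectral) + m_therm.predict_proba(thermal)) / 2
decision_pred = proba.argmax(axis=1)
```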

Language: English

Citations: 1

Twins Transformer: Rolling Bearing Fault Diagnosis based on Cross-attention Fusion of Time and Frequency Domain Features
Zhikang Gao, Yanxue Wang, Xinming Li

et al.

Measurement Science and Technology, Journal Year: 2024, Volume and Issue: 35(9), P. 096113 - 096113

Published: June 4, 2024

Abstract Current self-attention based Transformer models in the field of fault diagnosis are limited to identifying correlation information within a single sequence and are unable to capture both the time and frequency domain characteristics of the original signal. To address these limitations, this research introduces a two-channel Transformer model that integrates time and frequency domain features through a cross-attention mechanism. Initially, the time-domain signal is converted to the frequency domain using the Fast Fourier Transform, followed by global and local feature extraction via a Convolutional Neural Network. Next, with the self-attention mechanism of the Transformer, the features associated with long distances within each separate domain are modeled and then fed into the cross-attention fusion module. During the fusion process, one domain's features serve as the query Q while the other domain's features serve as the key-value pairs K and V. By calculating the attention weights between Q and K, the model excavates deeper correlations between the time and frequency domain features. Besides preserving the intrinsic associative information within sequences learned by the self-attention mechanism, the Twins Transformer also models the degree of association between the features of the different domains. Finally, the proposed model's performance was validated in four experiments on bearing fault datasets, achieving average accuracy rates of 99.67%, 98.76%, 98.47% and 99.41%. These results confirm the model's effective extraction of time and frequency domain features, demonstrating fast convergence, superior performance, and high accuracy.
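That Q/K/V arrangement can be sketched in a few lines of PyTorch: the code below tokenizes a raw signal in the time domain and its FFT magnitude in the frequency domain, then lets one domain query the other. The segmentation scheme, layer sizes, the choice of time as the query side, and the class count are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch: cross-attention where time-domain tokens form the query Q and
# frequency-domain tokens (FFT magnitudes) supply the key-value pairs K, V.
import torch
import torch.nn as nn

class TimeFreqCrossAttention(nn.Module):
    def __init__(self, seg=64, dim=64, heads=4, n_classes=4):
        super().__init__()
        self.seg = seg
        self.proj_t = nn.Linear(seg, dim)   # embed time-domain segments
        self.proj_f = nn.Linear(seg, dim)   # embed frequency-domain segments
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):                     # x: (batch, signal_length)
        t = x.unfold(1, self.seg, self.seg)   # time tokens
        f = torch.fft.rfft(x).abs()[:, :-1]   # drop Nyquist bin so the
        f = f.unfold(1, self.seg, self.seg)   # spectrum splits into segments
        q, kv = self.proj_t(t), self.proj_f(f)
        fused, _ = self.attn(query=q, key=kv, value=kv)  # Q attends over K, V
        return self.head(fused.mean(dim=1))

model = TimeFreqCrossAttention()
print(model(torch.randn(8, 1024)).shape)   # torch.Size([8, 4]) fault classes
```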

Language: English

Citations: 6

Deep evidential fusion with uncertainty quantification and reliability learning for multimodal medical image segmentation
Ling Huang, Su Ruan, Pierre Decazes

et al.

Information Fusion, Journal Year: 2024, Volume and Issue: 113, P. 102648 - 102648

Published: Aug. 23, 2024

Language: English

Citations: 6