An ISAR and Visible Image Fusion Algorithm Based on Adaptive Guided Multi-Layer Side Window Box Filter Decomposition
Jiajia Zhang, Huan Li, Dong Zhao et al.

Remote Sensing, Journal Year: 2023, Volume and Issue: 15(11), P. 2784 - 2784

Published: May 26, 2023

Traditional image fusion techniques generally use symmetrical methods to extract features from different sources of images. However, these conventional approaches do not resolve the information-domain discrepancy across multiple sources, resulting in incomplete fusion. To solve this problem, we propose an asymmetric decomposition method. Firstly, an information-abundance discrimination method is used to sort images into detailed and coarse categories. Then, different decomposition methods are proposed to extract features at different scales. Next, different fusion strategies are adopted for the features at different scales, including sum fusion, variance-based transformation, and integrated energy-based fusion. Finally, the fusion result is obtained through summation, retaining vital features from both sources. Eight metrics and two datasets containing registered visible, ISAR, and infrared images were used to evaluate the performance of the proposed method. The experimental results demonstrate that the asymmetric decomposition could preserve more details than the symmetric one, and performed better in both objective and subjective evaluations compared with fifteen state-of-the-art methods. These findings can inspire researchers to consider a new fusion framework that adapts to differences in the information richness of images, and promote the development of fusion technology.
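The side window box filter underlying this decomposition replaces each pixel by the mean of the half- or corner-aligned box window whose average is closest to the pixel's own value, which preserves edges that an ordinary centered box filter would blur. Below is a minimal numpy sketch of that core idea only; it is not the paper's adaptive guided multi-layer variant, and the function names are illustrative.

```python
import numpy as np

def _window_mean(pad, h, w, r, ys, xs):
    """Mean over the offset grid ys x xs around each pixel of the padded image."""
    acc = np.zeros((h, w))
    for dy in ys:
        for dx in xs:
            acc += pad[r + dy : r + dy + h, r + dx : r + dx + w]
    return acc / (len(ys) * len(xs))

def side_window_box_filter(img, r=1, iterations=1):
    """Edge-preserving box filter: each pixel takes the mean of the side
    window (up/down/left/right or one of four corners) closest to its value."""
    neg, pos, full = list(range(-r, 1)), list(range(0, r + 1)), list(range(-r, r + 1))
    windows = [
        (neg, full), (pos, full),   # up, down half windows
        (full, neg), (full, pos),   # left, right half windows
        (neg, neg), (neg, pos),     # NW, NE corner windows
        (pos, neg), (pos, pos),     # SW, SE corner windows
    ]
    out = np.asarray(img, dtype=float)
    h, w = out.shape
    for _ in range(iterations):
        pad = np.pad(out, r, mode='edge')
        means = np.stack([_window_mean(pad, h, w, r, ys, xs) for ys, xs in windows])
        best = np.abs(means - out).argmin(axis=0)   # window whose mean matches the pixel best
        out = np.take_along_axis(means, best[None], axis=0)[0]
    return out
```

On a step edge there is always a side window lying entirely on one side of the discontinuity, so the edge survives filtering exactly, unlike with a centered box mean.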

Language: English

YOLO-CIR: The network based on YOLO and ConvNeXt for infrared object detection
Jinjie Zhou, Baohui Zhang, Xilin Yuan et al.

Infrared Physics & Technology, Journal Year: 2023, Volume and Issue: 131, P. 104703 - 104703

Published: May 5, 2023

Language: English

Citations: 45

A review on infrared and visible image fusion algorithms based on neural networks
Kaixuan Yang, Xiang Wei, Zhenshuai Chen et al.

Journal of Visual Communication and Image Representation, Journal Year: 2024, Volume and Issue: 101, P. 104179 - 104179

Published: May 1, 2024

Infrared and visible image fusion represents a significant segment within the image fusion domain. The recent surge in processing hardware advancements, including GPUs, TPUs, and cloud computing platforms, has facilitated the processing of extensive datasets from multiple sensors. Given the remarkable proficiency of neural networks in feature extraction and fusion, their application to infrared and visible image fusion has emerged as a prominent research area in recent years. This article begins by providing an overview of current mainstream algorithms for infrared and visible image fusion based on neural networks, detailing the principles of various algorithms, representative works, and their respective advantages and disadvantages. Subsequently, it introduces domain-relevant datasets, evaluation metrics, and some typical application scenarios. Finally, it conducts qualitative and quantitative evaluations of the results of state-of-the-art algorithms and offers future prospects based on the experimental results.

Language: English

Citations: 9

Infrared and Visible Image Fusion: Methods, Datasets, Applications, and Prospects

Yongyu Luo, Zhongqiang Luo

Applied Sciences, Journal Year: 2023, Volume and Issue: 13(19), P. 10891 - 10891

Published: Sept. 30, 2023

Infrared and visible light image fusion combines infrared and visible images by extracting the main information from each and fusing it together, providing a more comprehensive image with the features of both photos. It has gained popularity in recent years and is increasingly being employed in sectors such as target recognition and tracking, night vision, scene segmentation, and others. In order to give a concise overview of image fusion, this paper first explores its historical context before outlining current domestic and international research efforts. Then, conventional approaches to fusion, such as the multi-scale decomposition method and the sparse representation method, are thoroughly introduced. The advancement of deep learning has greatly aided the field of image fusion, and the outcomes have a wide range of potential applications due to neural networks' strong feature extraction and reconstruction skills. As a result, this paper also evaluates deep learning techniques. After that, some common objective evaluation indexes are provided, and the performance of methods on common datasets is sorted out at the same time; datasets play a significant role as an essential component of fusion testing. The application of fusion in many domains is then briefly studied with practical examples, particularly in developing fields. Finally, a prospect for the field is presented, and the full text is summarized.
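The multi-scale decomposition approach surveyed above can be illustrated with a deliberately minimal two-scale example: each image is split into a low-frequency base layer (here, a plain box mean) and a high-frequency detail layer, the bases are averaged, and the per-pixel stronger detail coefficient is kept. This is a generic textbook scheme for illustration, not any specific method from the review, and the function names are assumptions.

```python
import numpy as np

def box_mean(img, r):
    """(2r+1) x (2r+1) box mean with edge-replicated borders."""
    pad = np.pad(img, r, mode='edge')
    h, w = img.shape
    acc = np.zeros((h, w))
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += pad[r + dy : r + dy + h, r + dx : r + dx + w]
    return acc / (2 * r + 1) ** 2

def two_scale_fusion(a, b, r=7):
    """Toy multi-scale fusion: average the base layers, then keep, per pixel,
    the detail coefficient with the larger magnitude."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    base_a, base_b = box_mean(a, r), box_mean(b, r)
    detail_a, detail_b = a - base_a, b - base_b
    base = 0.5 * (base_a + base_b)                      # averaging rule for low frequencies
    detail = np.where(np.abs(detail_a) >= np.abs(detail_b),
                      detail_a, detail_b)               # max-abs rule for high frequencies
    return base + detail
```

Real multi-scale methods use more than two levels and smarter, often saliency- or energy-weighted, fusion rules, but the decompose/fuse-per-scale/reconstruct structure is the same.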

Language: English

Citations: 17

MRASFusion: A multi-scale residual attention infrared and visible image fusion network based on semantic segmentation guidance

Rongsheng An, Gang Liu, Yao Qian et al.

Infrared Physics & Technology, Journal Year: 2024, Volume and Issue: 139, P. 105343 - 105343

Published: May 8, 2024

Language: English

Citations: 4

MSCS: Multi-stage feature learning with channel-spatial attention mechanism for infrared and visible image fusion
Zhenghua Huang, Biyun Xu, Menghan Xia et al.

Infrared Physics & Technology, Journal Year: 2024, Volume and Issue: 142, P. 105514 - 105514

Published: Aug. 23, 2024

Language: English

Citations: 4

Medical image fusion model using CT and MRI images based on dual scale weighted fusion based residual attention network with encoder-decoder architecture

G Poornima, L. Anand

Biomedical Signal Processing and Control, Journal Year: 2025, Volume and Issue: 108, P. 107932 - 107932

Published: May 1, 2025

Language: English

Citations: 0

Multi-Text Guidance Is Important: Multi-Modality Image Fusion via Large Generative Vision-Language Model
Zeyu Wang, L. H. Zhao, Jizheng Zhang et al.

International Journal of Computer Vision, Journal Year: 2025, Volume and Issue: unknown

Published: March 17, 2025

Citations: 0

End-to-End Detection of a Landing Platform for Offshore UAVs Based on a Multimodal Early Fusion Approach
Francisco S. Neves, Rafael Marques Claro, Andry Maykol Pinto et al.

Sensors, Journal Year: 2023, Volume and Issue: 23(5), P. 2434 - 2434

Published: Feb. 22, 2023

A perception module is a vital component of a modern robotic system. Vision, radar, thermal, and LiDAR are among the most common choices of sensors for environmental awareness. Relying on singular sources of information is prone to being affected by specific environmental conditions (e.g., visual cameras in glary or dark environments). Thus, drawing on different sources of information is an essential step to introduce robustness against varying conditions. Hence, a perception system with sensor fusion capabilities produces the desired redundant and reliable awareness that is critical for real-world systems. This paper proposes a novel early fusion module that is resilient to individual cases of sensor failure when detecting an offshore maritime platform for UAV landing. The model explores the still unexplored combination of visual, infrared, and LiDAR modalities. The contribution is described by suggesting a simple methodology that intends to facilitate the training and inference of a lightweight state-of-the-art object detector. The early-fusion-based detector achieves solid detection recalls of up to 99% in all extreme weather conditions, such as glary, dark, and foggy scenarios, with a fair real-time inference duration below 6 ms.

Language: English

Citations: 9

SFPFusion: An Improved Vision Transformer Combining Super Feature Attention and Wavelet-Guided Pooling for Infrared and Visible Images Fusion
Hui Li, Yongbiao Xiao, Chunyang Cheng et al.

Sensors, Journal Year: 2023, Volume and Issue: 23(18), P. 7870 - 7870

Published: Sept. 13, 2023

The infrared and visible image fusion task aims to generate a single fused image that preserves complementary features and reduces redundant information from different modalities. Although convolutional neural networks (CNNs) can effectively extract local features and obtain good performance, the size of the receptive field limits their feature extraction ability. Thus, the Transformer architecture has gradually become mainstream for extracting global features. However, current Transformer-based methods ignore the enhancement of details, which is important for image fusion and other downstream vision tasks. To this end, a new super feature attention mechanism and a wavelet-guided pooling operation are applied to the fusion network to form a novel architecture, termed SFPFusion. Specifically, the super feature attention is able to establish long-range dependencies of images, and the multi-scale base features fully extracted and processed by wavelet-guided pooling enhance the detail features. With this powerful representation ability, only simple fusion strategies are utilized to achieve good fusion performance. The superiority of our method compared with state-of-the-art methods is demonstrated in qualitative and quantitative experiments on multiple benchmarks.

Language: English

Citations: 9

GLFuse: A Global and Local Four-Branch Feature Extraction Network for Infrared and Visible Image Fusion
Genping Zhao, Zhuyong Hu, Silu Feng et al.

Remote Sensing, Journal Year: 2024, Volume and Issue: 16(17), P. 3246 - 3246

Published: Sept. 1, 2024

Infrared and visible image fusion integrates complementary information from different modalities into a single image, providing sufficient imaging information for scene interpretation and downstream target recognition tasks. However, existing methods often focus only on highlighting salient targets or preserving scene details, failing to effectively combine the entire set of features during the fusion process, resulting in underutilized features and poor overall fusion effects. To address these challenges, a global and local four-branch feature extraction network (GLFuse) is proposed. On one hand, a Super Token Transformer (STT) block, which is capable of rapidly sampling and predicting super tokens, is utilized to capture global features of the scene. On the other hand, a Detail Extraction Block (DEB) is developed to extract local features. Additionally, two modules, namely the Attention-based Feature Selection Fusion Module (ASFM) and the Dual Attention Fusion Module (DAFM), are designed to facilitate selective fusion of features from different modalities. Of more importance, the various perceptual maps learned from modality images at different layers are investigated to design a loss function that better restores detail and highlights salient targets by treating them separately. Extensive experiments confirm that GLFuse exhibits excellent performance in both subjective and objective evaluations. It deserves note that it also improves downstream detection performance on a unified benchmark.

Language: English

Citations: 2