An ISAR and Visible Image Fusion Algorithm Based on Adaptive Guided Multi-Layer Side Window Box Filter Decomposition
Jiajia Zhang, Huan Li, Dong Zhao

et al.

Remote Sensing, Journal year: 2023, Number 15(11), pp. 2784 - 2784

Published: May 26, 2023

Traditional image fusion techniques generally use symmetrical methods to extract features from different sources of images. However, these conventional approaches do not resolve the information domain discrepancy across multiple sources, resulting in incomplete fusion. To solve this problem, we propose an asymmetric decomposition method. Firstly, an information abundance discrimination method is used to sort images into detailed and coarse categories. Then, different decomposition methods are proposed to extract features at different scales. Next, different fusion strategies are adopted for features at different scales, including sum fusion, variance-based transformation, integrated fusion, and energy-based fusion. Finally, the fusion result is obtained through summation, retaining vital features from both sources. Eight fusion metrics and two datasets containing registered visible, ISAR, and infrared images were used to evaluate the performance of the proposed method. The experimental results demonstrate that the asymmetric decomposition method could preserve more details than the symmetric one, and performed better in both objective and subjective evaluations compared with fifteen state-of-the-art fusion methods. These findings can inspire researchers to consider a new fusion framework that adapts to differences in the information richness of source images, and promote the development of fusion technology.
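
To make the layered strategy concrete, the sketch below illustrates the general idea in Python rather than the authors' implementation: each source is split into detail and base layers with box filtering, detail layers are merged by sum fusion, and base layers are merged by an energy-based weighting. The function names (box_decompose, fuse_asymmetric), the filter sizes, and the use of a plain box filter in place of the paper's adaptive guided side window box filter are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def box_decompose(img, sizes=(3, 9)):
    """Split an image into detail layers and a base layer via repeated box filtering."""
    base = img.astype(np.float64)
    details = []
    for s in sizes:
        smooth = uniform_filter(base, size=s)
        details.append(base - smooth)  # detail layer = current layer minus its smoothed version
        base = smooth
    return details, base

def fuse_asymmetric(detailed_img, coarse_img):
    """Fuse a detail-rich source with a coarse source using per-layer strategies."""
    d_details, d_base = box_decompose(detailed_img)
    c_details, c_base = box_decompose(coarse_img)
    # Detail layers: sum fusion keeps fine structure contributed by either source.
    fused_details = sum(a + b for a, b in zip(d_details, c_details))
    # Base layers: energy-based weighting favours the locally stronger source.
    ea = uniform_filter(d_base ** 2, size=15)
    eb = uniform_filter(c_base ** 2, size=15)
    w = ea / (ea + eb + 1e-12)
    fused_base = w * d_base + (1.0 - w) * c_base
    return np.clip(fused_details + fused_base, 0.0, 255.0)
```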

Language: English

YOLO-CIR: The network based on YOLO and ConvNeXt for infrared object detection
Jinjie Zhou, Baohui Zhang, Xilin Yuan

et al.

Infrared Physics & Technology, Journal year: 2023, Number 131, pp. 104703 - 104703

Published: May 5, 2023

Language: English

Cited by

45

A review on infrared and visible image fusion algorithms based on neural networks
Kaixuan Yang, Wei Xiang, Zhenshuai Chen

et al.

Journal of Visual Communication and Image Representation, Journal year: 2024, Number 101, pp. 104179 - 104179

Published: May 1, 2024

Infrared and visible image fusion represents a significant segment within the image fusion domain. The recent surge in processing hardware advancements, including GPUs, TPUs, and cloud computing platforms, has facilitated the processing of extensive datasets from multiple sensors. Given the remarkable proficiency of neural networks in feature extraction and fusion, their application to infrared and visible image fusion has emerged as a prominent research area in recent years. This article begins by providing an overview of the current mainstream algorithms for infrared and visible image fusion based on neural networks, detailing the principles of various algorithms, their representative works, and their respective advantages and disadvantages. Subsequently, it introduces domain-relevant datasets, evaluation metrics, and some typical application scenarios. Finally, it conducts qualitative and quantitative evaluations of the fusion results of state-of-the-art algorithms and offers future prospects based on the experimental results.

Language: English

Cited by

12

Infrared and Visible Image Fusion: Methods, Datasets, Applications, and Prospects

Yongyu Luo, Zhongqiang Luo

Applied Sciences, Journal year: 2023, Number 13(19), pp. 10891 - 10891

Published: Sep. 30, 2023

Infrared and visible light image fusion combines infrared and visible light images by extracting the main information from each and fusing it together to provide a more comprehensive image with the features of the two photos. It has gained popularity in recent years and is increasingly being employed in sectors such as target recognition and tracking, night vision, scene segmentation, and others. In order to provide a concise overview of infrared and visible picture fusion, this paper first explores its historical context before outlining current domestic and international research efforts. Then, conventional approaches for infrared and visible image fusion, such as the multi-scale decomposition method and the sparse representation method, are thoroughly introduced. The advancement of deep learning has greatly aided the field of image fusion, and its outcomes have a wide range of potential applications due to neural networks' strong feature extraction and reconstruction skills. As a result, this paper also evaluates deep-learning-based fusion techniques. After that, some common objective evaluation indexes are provided, and the performance of datasets in different areas is sorted out at the same time; datasets play a significant role and are an essential component of fusion testing. The application of fusion in many domains is then briefly studied through practical examples, particularly in developing fields, to show its practical use. Finally, a prospect for the field is presented, and the full text is summarized.
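
As a concrete example of the multi-scale decomposition family surveyed here, the sketch below shows a textbook-style Laplacian pyramid fusion rule rather than any specific method from the review; it assumes single-channel inputs of equal size and relies on OpenCV's pyramid functions.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Build a Laplacian pyramid: band-pass detail layers plus a low-pass residual."""
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    pyr = [gauss[i] - cv2.pyrUp(gauss[i + 1], dstsize=gauss[i].shape[1::-1])
           for i in range(levels)]
    return pyr + [gauss[-1]]

def fuse_multiscale(ir, vis, levels=4):
    """Max-absolute selection on detail bands, averaging on the low-pass base band."""
    pa, pb = laplacian_pyramid(ir, levels), laplacian_pyramid(vis, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    # Collapse the fused pyramid back into a single image.
    out = fused[-1]
    for layer in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=layer.shape[1::-1]) + layer
    return np.clip(out, 0, 255).astype(np.uint8)
```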

Language: English

Cited by

17

MSCS: Multi-stage feature learning with channel-spatial attention mechanism for infrared and visible image fusion
Zhenghua Huang, Biyun Xu, Menghan Xia

et al.

Infrared Physics & Technology, Journal year: 2024, Number 142, pp. 105514 - 105514

Published: Aug. 23, 2024

Language: English

Cited by

5

End-to-End Detection of a Landing Platform for Offshore UAVs Based on a Multimodal Early Fusion Approach
Francisco S. Neves, Rafael Marques Claro, Andry Maykol Pinto

et al.

Sensors, Journal year: 2023, Number 23(5), pp. 2434 - 2434

Published: Feb. 22, 2023

A perception module is a vital component of a modern robotic system. Vision, radar, thermal, and LiDAR are the most common choices of sensors for environmental awareness. Relying on singular sources of information is prone to being affected by specific environmental conditions (e.g., visual cameras in glary or dark environments). Thus, relying on different sensors is an essential step to introduce robustness against various environmental conditions. Hence, a perception system with sensor fusion capabilities produces the desired redundant and reliable awareness that is critical for real-world systems. This paper proposes a novel early fusion approach that is resilient to individual cases of sensor failure when detecting an offshore maritime platform for UAV landing. The model explores the still unexplored combination of visual, infrared, and LiDAR modalities. The contribution is described by suggesting a simple methodology that intends to facilitate the training and inference of a lightweight state-of-the-art object detector. The early-fusion-based detector achieves solid detection recalls of up to 99% in all extreme weather conditions, such as glary, dark, and foggy scenarios, with a fair real-time inference duration below 6 ms.
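
As a rough illustration of what input-level (early) fusion looks like, the PyTorch sketch below stacks the modalities channel-wise and adapts them to a standard detector backbone. The class name, the 1x1 adapter convolution, and the five-channel RGB + thermal + LiDAR-depth layout are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class EarlyFusionStem(nn.Module):
    """Channel-wise (early) fusion of RGB, thermal, and LiDAR-depth inputs.

    A minimal sketch: the detector backbone is passed in as a black box.
    """
    def __init__(self, detector_backbone: nn.Module, in_channels: int = 5):
        super().__init__()
        # Adapt the stacked multi-modal input to the 3-channel stem most detectors expect.
        self.adapter = nn.Conv2d(in_channels, 3, kernel_size=1)
        self.backbone = detector_backbone

    def forward(self, rgb, thermal, lidar_depth):
        # rgb: (B, 3, H, W); thermal and lidar_depth: (B, 1, H, W), resampled to the same grid.
        x = torch.cat([rgb, thermal, lidar_depth], dim=1)
        return self.backbone(self.adapter(x))
```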

Language: English

Cited by

10

MRASFusion: A multi-scale residual attention infrared and visible image fusion network based on semantic segmentation guidance

Rongsheng An, Gang Liu, Yao Qian

et al.

Infrared Physics & Technology, Journal year: 2024, Number 139, pp. 105343 - 105343

Published: May 8, 2024

Language: English

Cited by

4

SFPFusion: An Improved Vision Transformer Combining Super Feature Attention and Wavelet-Guided Pooling for Infrared and Visible Images Fusion
Hui Li, Yongbiao Xiao, Chunyang Cheng

et al.

Sensors, Journal year: 2023, Number 23(18), pp. 7870 - 7870

Published: Sep. 13, 2023

The infrared and visible image fusion task aims to generate a single fused image that preserves complementary features and reduces redundant information from the different modalities. Although convolutional neural networks (CNNs) can effectively extract local features and obtain better performance, the size of the receptive field limits their feature extraction ability. Thus, the Transformer architecture has gradually become mainstream for extracting global features. However, current Transformer-based fusion methods ignore the enhancement of details, which is important for image fusion tasks and other downstream vision tasks. To this end, a new super feature attention mechanism and a wavelet-guided pooling operation are applied to the fusion network to form a novel fusion network, termed SFPFusion. Specifically, the super feature attention is able to establish long-range dependencies of images, and the fully extracted multi-scale features are processed by the wavelet-guided pooling and multi-scale base modules to enhance the detail information. With this powerful representation ability, only simple fusion strategies are utilized to achieve better fusion performance. The superiority of our method compared with state-of-the-art methods is demonstrated in qualitative and quantitative experiments on multiple benchmarks.
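
One plausible reading of wavelet-guided pooling, sketched below under stated assumptions rather than as the paper's definition, is to downsample a feature map with a 2D discrete wavelet transform, keeping the low-frequency subband as the pooled output while exposing the high-frequency subbands to a detail-enhancement branch. The function name wavelet_pool and the Haar wavelet choice are hypothetical.

```python
import numpy as np
import pywt

def wavelet_pool(feature_map, wavelet="haar"):
    """Downsample a (C, H, W) feature map with a 2D discrete wavelet transform.

    Returns the low-frequency (LL) subband as the pooled map and the stacked
    high-frequency subbands (LH, HL, HH) that a detail branch could consume.
    """
    pooled, details = [], []
    for channel in feature_map:
        ll, (lh, hl, hh) = pywt.dwt2(channel, wavelet)
        pooled.append(ll)
        details.append(np.stack([lh, hl, hh]))
    return np.stack(pooled), np.stack(details)

# Example: a random 64-channel feature map is halved in spatial resolution.
features = np.random.rand(64, 32, 32).astype(np.float32)
low, high = wavelet_pool(features)
print(low.shape, high.shape)  # (64, 16, 16) (64, 3, 16, 16)
```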

Language: English

Cited by

9

Multi-Text Guidance Is Important: Multi-Modality Image Fusion via Large Generative Vision-Language Model
Zeyu Wang, L. H. Zhao, Jizheng Zhang

et al.

International Journal of Computer Vision, Journal year: 2025, Number unknown

Published: March 17, 2025

Cited by

0

Medical image fusion model using CT and MRI images based on dual scale weighted fusion based residual attention network with encoder-decoder architecture

G Poornima, L. Anand

Biomedical Signal Processing and Control, Journal year: 2025, Number 108, pp. 107932 - 107932

Published: May 1, 2025

Language: English

Cited by

0

Lightweight Infrared and Visible Image Fusion via Adaptive DenseNet with Knowledge Distillation

Zongqing Zhao, Shaojing Su, Junyu Wei

et al.

Electronics, Journal year: 2023, Number 12(13), pp. 2773 - 2773

Published: June 22, 2023

The fusion of infrared and visible images produces a complementary image that captures both the infrared radiation information and the visible texture structure details provided by the respective sensors. However, current deep-learning-based approaches mainly tend to prioritize visual quality and statistical metrics, leading to increased model complexity and weight parameter sizes. To address these challenges, we propose a novel dual-light fusion approach using adaptive DenseNet with knowledge distillation to learn and compress from pre-existing fusion models, which achieves the goals of model compression through the use of hyperparameters such as the width and depth of the network. The effectiveness of our proposed approach is evaluated on a new dataset comprising three public datasets (MSRS, M3FD, and LLVIP), and both the qualitative and quantitative experimental results show that the distilled model effectively matches the original models' performance with smaller weight parameters and shorter inference times.
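
A generic output-level knowledge-distillation step is sketched below to illustrate the compression idea: a compact student network is trained to mimic a frozen teacher's fused output while keeping some fidelity to the source images. The loss weighting, the max-based source term, and the function signature are illustrative assumptions, not the paper's training objective.

```python
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, ir, vis, alpha=0.8):
    """One training step in which a compact student mimics a frozen teacher's fused output."""
    with torch.no_grad():
        teacher_fused = teacher(ir, vis)  # pre-existing fusion model, kept frozen
    student_fused = student(ir, vis)
    # Match the teacher's output while retaining some fidelity to the source images.
    loss_kd = F.l1_loss(student_fused, teacher_fused)
    loss_src = F.l1_loss(student_fused, torch.max(ir, vis))
    return alpha * loss_kd + (1.0 - alpha) * loss_src
```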

Language: English

Cited by

6