Color-aware fusion of nighttime infrared and visible images
Jiaxin Yao, Yongqiang Zhao, Yuanyang Bu et al.

Engineering Applications of Artificial Intelligence, 2024, Vol. 139, pp. 109521 - 109521

Published: Nov. 4, 2024

Language: English

Cited: 0

Rethinking the approach to lightweight multi-branch heterogeneous image fusion frameworks: Infrared and visible image fusion via the parallel Mamba-KAN framework
Guangkai Sun, Mingli Dong, Lianqing Zhu et al.

Optics & Laser Technology, 2025, Vol. 185, pp. 112612 - 112612

Published: Feb. 17, 2025

Language: English

Cited: 0

Enhancing infrared and visible image fusion through multiscale Gaussian total variation and adaptive local entropy
Hao Li, Shengkun Wu, Lei Deng et al.

The Visual Computer, 2025, Vol. unknown

Published: March 11, 2025

Language: English

Cited: 0

MBHFuse: A multi-branch heterogeneous global and local infrared and visible image fusion with differential convolutional amplification features
Yichen Sun, Mingli Dong, Mingxin Yu et al.

Optics & Laser Technology, 2024, Vol. 181, pp. 111666 - 111666

Published: Aug. 27, 2024

Language: English

Cited: 2

ACDF-YOLO: Attentive and Cross-Differential Fusion Network for Multimodal Remote Sensing Object Detection
Xuan Fei, Mengyao Guo, Yan Li et al.

Remote Sensing, 2024, Vol. 16(18), pp. 3532 - 3532

Published: Sep. 23, 2024

Object detection in remote sensing images has received significant attention for a wide range of applications. However, traditional unimodal images, whether based on visible light or infrared, have limitations that cannot be ignored. Visible-light images are susceptible to ambient lighting conditions, and their detection accuracy can be greatly reduced. Infrared images often lack rich texture information, resulting in a high false-detection rate during target identification and classification. To address these challenges, we propose a novel multimodal fusion network model, named ACDF-YOLO, based on the lightweight and efficient YOLOv5 structure, which aims to amalgamate synergistic data from both visible and infrared imagery, thereby enhancing the efficiency of object detection in multimodal imagery. Firstly, a shuffle module is designed to assist in extracting features from various modalities. Secondly, deeper fusion of modal information is achieved by introducing a new cross-modal difference module to fuse the features that have been acquired. Finally, the two modules mentioned above are combined in an effective manner to achieve the ACDF module. The ACDF module not only enhances the characterization ability of the fused features but also further refines the capture and reinforcement of important channel features. Experimental validation was performed using several publicly available real-world datasets. Compared with other advanced methods, ACDF-YOLO achieves 95.87% and 78.10% mAP0.5 on the LLVIP and VEDAI datasets, respectively, demonstrating that deep fusion of different modal information can effectively improve object detection.
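The paper's own code is not reproduced here, but the cross-modal difference idea in the abstract can be made concrete. Below is a minimal PyTorch sketch of one plausible form of such a block, in which each modality's features are reinforced by the channel-attended difference against the other modality; the class name, attention design, and shapes are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class CrossModalDifferenceFusion(nn.Module):
    # Hypothetical block: each branch is reinforced by the channel-attended
    # difference against the other, then the two branches are fused by 1x1 conv.
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # global channel statistics
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                   # per-channel weights in (0, 1)
        )
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, f_vis: torch.Tensor, f_ir: torch.Tensor) -> torch.Tensor:
        diff_v = f_vis - f_ir    # content strong in visible but weak in infrared
        diff_i = f_ir - f_vis    # content strong in infrared but weak in visible
        f_vis = f_vis + self.attn(diff_i) * diff_i   # reinforce with the other modality
        f_ir = f_ir + self.attn(diff_v) * diff_v
        return self.fuse(torch.cat([f_vis, f_ir], dim=1))

# Example: fuse 64-channel feature maps from the two backbone branches.
block = CrossModalDifferenceFusion(channels=64)
fused = block(torch.randn(1, 64, 80, 80), torch.randn(1, 64, 80, 80))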

Language: English

Cited: 2

A Novel Teacher-student Framework with Degradation Model for Infrared-Visible Image Fusion
Weimin Xue, Yisha Liu, Fei Wang et al.

IEEE Transactions on Instrumentation and Measurement, 2024, Vol. 73, pp. 1 - 12

Published: Jan. 1, 2024

The fusion performance of infrared and visible images depends on the quality of the source images, which are often affected by various factors in real-world scenarios, such as environmental changes, hardware limitations, and image compression. The influence of these factors can be minimized by training a neural network capable of generating high-quality fused images from low-quality source images. However, in real-world conditions, it is challenging to acquire paired low-quality source images and their corresponding high-quality fused images for training. To address this issue, we propose a novel teacher-student framework with a degradation model for infrared-visible image fusion, namely TSDM-Fusion. In this framework, the teacher network is utilized to generate high-quality fused RGB images, while a degradation model is employed to produce low-quality source images. Subsequently, the obtained image pairs are used to train the student network, enabling it to learn the mapping from low-quality source images to high-quality fused images. As the most important part of the framework, the degradation model simulates real-world imaging processes, including brightness, contrast, blur, noise, and JPEG compression degradations. Experiments on multiple public datasets and on the M3FD detection dataset demonstrate that our method can not only enhance visual effects but also improve detection mAP. The code and pre-trained models are available at https://github.com/bearxwm/TSDM-Fusion.
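The abstract names the degradation types the model simulates: brightness, contrast, blur, noise, and JPEG compression. A minimal Pillow/NumPy sketch of such a degradation chain follows; the function name and all parameter values are illustrative assumptions rather than the paper's settings, and an RGB input image is assumed.

import io
import numpy as np
from PIL import Image, ImageEnhance, ImageFilter

def degrade(img, brightness=0.8, contrast=0.8, blur_radius=1.5,
            noise_std=5.0, jpeg_quality=40):
    # Brightness and contrast shifts (factor < 1 darkens / flattens the image).
    img = ImageEnhance.Brightness(img).enhance(brightness)
    img = ImageEnhance.Contrast(img).enhance(contrast)
    # Optical blur.
    img = img.filter(ImageFilter.GaussianBlur(blur_radius))
    # Additive Gaussian sensor noise.
    arr = np.asarray(img).astype(np.float32)
    arr += np.random.normal(0.0, noise_std, size=arr.shape)
    img = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
    # JPEG compression artifacts via an in-memory encode/decode round trip.
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=jpeg_quality)
    return Image.open(io.BytesIO(buf.getvalue()))

# Example: degrade("clean.png" loaded as RGB) to synthesize a low-quality input.
low_quality = degrade(Image.open("clean.png").convert("RGB"))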

Language: English

Cited: 0

FCLFusion: A Frequency-Aware and Collaborative Learning for Infrared and Visible Image Fusion
Chengchao Wang, Yuanyuan Pu, Zhengpeng Zhao et al.

Published: Jan. 1, 2024

Infrared and visible image fusion (IVIF) aims to integrate the advantages of different modal images. Most existing deep learning-based methods focus on the single IVIF task and ignore the effect of frequency information on the results, and thus do not fully preserve salient structures and important texture details. The core idea of this paper is based on the following observations: (1) image content can be characterized by different frequency-domain components, with low-frequency components carrying base information, such as saliency structure, while detail information is located in the high-frequency components; (2) multi-task learning in general achieves better performance than single-task learning. Based on these observations, we propose a fusion model with Frequency-aware Collaborative Learning (FCLFusion) from a frequency perspective. The model takes fusion as the main task and introduces reconstruction as an auxiliary task to collaboratively optimize the network, thereby improving fusion quality. Specifically, it transforms spatial features into the frequency domain and develops a frequency-aware feature module for guiding the primary network to generate fused images, while an auxiliary sub-network generates reconstructed images and passes details via skip connections. Moreover, a hybrid loss function is designed that consists of two terms: a fusion loss and a self-supervised reconstruction loss; the former constrains the frequency domain, while the latter aids the extraction of vital information. Through a comparative study and extensive experiments, it is verified that FCLFusion achieves superior performance on three public datasets.
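FCLFusion learns its frequency handling inside a network, but observation (1) can be illustrated with a classical, non-learned split: a Gaussian low-pass gives the base layer and the residual gives the detail layer. The sketch below, including the naive fusion rule built on that split, illustrates the observation only and is not the paper's method.

import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_split(img, sigma=3.0):
    # Low-pass (base) layer and its high-frequency (detail) residual.
    low = gaussian_filter(img.astype(np.float32), sigma=sigma)
    return low, img.astype(np.float32) - low

def naive_frequency_fusion(vis, ir, sigma=3.0):
    # vis, ir: single-channel uint8 arrays of the same shape.
    low_v, high_v = frequency_split(vis, sigma)
    low_i, high_i = frequency_split(ir, sigma)
    base = 0.5 * (low_v + low_i)  # shared scene layout from both modalities
    # Keep whichever modality has the stronger local detail response.
    detail = np.where(np.abs(high_v) >= np.abs(high_i), high_v, high_i)
    return np.clip(base + detail, 0, 255).astype(np.uint8)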

Language: English

Cited: 0

RAN: Infrared and Visible Image Fusion Network Based on Residual Attention Decomposition
Jia Yu, Gehao Lu, Jie Zhang et al.

Electronics, 2024, Vol. 13(14), pp. 2856 - 2856

Published: July 19, 2024

Infrared and visible image fusion (IVIF) is a research direction that is currently attracting much attention in the field of image processing. The main goal is to obtain a fused image by reasonably fusing infrared images and visible images, while retaining the advantageous features of each source image. This research aims to improve fusion quality, enhance target recognition ability, and broaden the application areas of image fusion. To advance this area, we propose a breakthrough method based on a Residual Attention Network (RAN). By applying this innovative network to the task of image fusion, the mechanism of residual attention can better capture critical background and detail information, significantly improving the quality and effectiveness of fusion. Experimental results on public domain datasets show that our method performs excellently on multiple key metrics. For example, compared with existing methods, it improves standard deviation (SD) by 35.26%, spatial frequency (SF) by 109.85%, average gradient (AG) by 96.93%, and structural similarity (SSIM) by 23.47%. These significant improvements validate the superiority of the proposed method and open up new possibilities for enhancing the performance and adaptability of image fusion networks.
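The SD, SF, and AG gains quoted above refer to standard no-reference fusion metrics. As a reference, the sketch below gives their common definitions for a single-channel image (minor variants of these formulas exist in the literature).

import numpy as np

def standard_deviation(img):
    # SD: spread of intensities; higher usually indicates higher contrast.
    return float(np.std(img.astype(np.float64)))

def spatial_frequency(img):
    # SF: RMS of horizontal and vertical first differences.
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean((img[:, 1:] - img[:, :-1]) ** 2))  # row frequency
    cf = np.sqrt(np.mean((img[1:, :] - img[:-1, :]) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

def average_gradient(img):
    # AG: mean magnitude of the local intensity gradient.
    img = img.astype(np.float64)
    dx = img[:-1, 1:] - img[:-1, :-1]
    dy = img[1:, :-1] - img[:-1, :-1]
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))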

Language: English

Cited: 0

FCLFusion: A frequency-aware and collaborative learning for infrared and visible image fusion
Chengchao Wang, Yuanyuan Pu, Zhengpeng Zhao et al.

Engineering Applications of Artificial Intelligence, 2024, Vol. 137, pp. 109192 - 109192

Published: Aug. 29, 2024

Language: English

Cited: 0

CMEFusion: Cross-Modal Enhancement and Fusion of FIR and Visible Images
Xi Tong, Xing Luo, Jiangxin Yang et al.

IEEE Transactions on Computational Imaging, 2024, Vol. 10, pp. 1331 - 1345

Published: Jan. 1, 2024

Language: English

Cited: 0
