An ISAR and Visible Image Fusion Algorithm Based on Adaptive Guided Multi-Layer Side Window Box Filter Decomposition
Jiajia Zhang, Huan Li, Dong Zhao

et al.

Remote Sensing, Journal Year: 2023, Volume and Issue: 15(11), P. 2784 - 2784

Published: May 26, 2023

Traditional image fusion techniques generally use symmetrical methods to extract features from different source images. However, these conventional approaches do not resolve the information-domain discrepancy across multiple sources, resulting in incomplete fusion. To solve this problem, we propose an asymmetric decomposition method. Firstly, an abundance-discrimination method is used to sort images into detailed and coarse categories. Then, asymmetric decomposition methods are proposed at different scales. Next, different fusion strategies are adopted for the different scale features, including sum fusion, variance-based transformation fusion, and integrated energy-based fusion. Finally, the fusion result is obtained through summation, retaining the vital features of both sources. Eight fusion metrics and two datasets containing registered visible, ISAR, and infrared images were used to evaluate the performance of the proposed method. The experimental results demonstrate that the asymmetric decomposition method could preserve more details than symmetric ones, and performed better in both objective and subjective evaluations compared with fifteen state-of-the-art fusion methods. These findings can inspire researchers to consider a new fusion framework that adapts to the differences in information richness of source images, and promote the development of fusion technology.
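The building block named in the title, the side window box filter, is simple enough to sketch. Below is a minimal single-layer NumPy version of the basic side window filter (Yin et al.), not the paper's adaptive guided multi-layer variant; it illustrates why the filter preserves edges that an ordinary box filter would blur:

```python
import numpy as np

def side_window_box_filter(img: np.ndarray, r: int = 1) -> np.ndarray:
    """For every pixel, average over the eight side windows whose edge or
    corner is aligned with the pixel, then keep the mean closest to the
    original value. Edges survive because at least one side window lies
    entirely on the pixel's side of the edge."""
    img = img.astype(np.float64)
    h, w = img.shape
    pad = np.pad(img, r, mode="edge")
    # Half/full index spans used to build the 8 side windows.
    spans = [(-r, 0), (0, r), (-r, r)]
    out = img.copy()
    best = np.full((h, w), np.inf)
    for r0, r1 in spans:
        for c0, c1 in spans:
            if (r0, r1) == (-r, r) and (c0, c1) == (-r, r):
                continue  # skip the centered full window
            acc = np.zeros((h, w))
            for dr in range(r0, r1 + 1):
                for dc in range(c0, c1 + 1):
                    acc += pad[r + dr : r + dr + h, r + dc : r + dc + w]
            mean = acc / ((r1 - r0 + 1) * (c1 - c0 + 1))
            diff = np.abs(mean - img)
            closer = diff < best
            out[closer] = mean[closer]
            best[closer] = diff[closer]
    return out
```

On a sharp vertical step, the output equals the input exactly, whereas a plain box filter would smear the edge across the kernel width.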

Language: English

Lightweight Infrared and Visible Image Fusion via Adaptive DenseNet with Knowledge Distillation

Zongqing Zhao,

Shaojing Su,

Junyu Wei

et al.

Electronics, Journal Year: 2023, Volume and Issue: 12(13), P. 2773 - 2773

Published: June 22, 2023

The fusion of infrared and visible images produces a complementary image that captures both the radiation information and the texture and structure details provided by the respective sensors. However, current deep-learning-based approaches mainly tend to prioritize visual quality and statistical metrics, leading to increased model complexity and weight-parameter sizes. To address these challenges, we propose a novel dual-light fusion approach using an adaptive DenseNet with knowledge distillation to learn and compress from pre-existing fusion models, which achieves the goal of model compression through the use of hyperparameters such as the width and depth of the network. The effectiveness of our proposed approach is evaluated on a new dataset comprising three public datasets (MSRS, M3FD, and LLVIP), and qualitative and quantitative experimental results show that the distilled model effectively matches the original models' performance with smaller parameter sizes and shorter inference times.
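In its simplest form, the distillation described above reduces to a two-term objective: a small student network is pulled toward the larger teacher's fused output while still fitting the fusion target. The sketch below is illustrative only; the function names and the fixed `alpha` are assumptions, not the paper's actual loss:

```python
import numpy as np

def distillation_loss(student_out, teacher_out, fused_target, alpha=0.7):
    """Hypothetical distillation objective: alpha weights the imitation
    term (match the teacher) against the task term (match the reference
    fusion target)."""
    imitation = np.mean((student_out - teacher_out) ** 2)  # teacher match
    task = np.mean((student_out - fused_target) ** 2)      # target match
    return alpha * imitation + (1.0 - alpha) * task
```

During training, `alpha` trades off how closely the compressed model tracks the teacher versus the ground-truth objective.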

Language: English

Citations

6

FDNet: An end-to-end fusion decomposition network for infrared and visible images
Jing Di, Ren Li, Jizhao Liu

et al.

PLoS ONE, Journal Year: 2023, Volume and Issue: 18(9), P. e0290231 - e0290231

Published: Sept. 18, 2023

Infrared and visible image fusion can generate a fused image with clear texture and prominent targets under extreme conditions. This capability is important for all-day, all-weather detection and other tasks. However, most existing methods for extracting features from infrared and visible images are based on convolutional neural networks (CNNs). These methods often fail to make full use of the salient objects in the raw images, leading to problems such as insufficient details and low contrast in the fused images. To this end, we propose an unsupervised end-to-end Fusion Decomposition Network (FDNet) for infrared and visible image fusion. Firstly, we construct a fusion network that extracts gradient and intensity information from the source images, using multi-scale layers, depthwise separable convolution, and an improved convolution block attention module (I-CBAM). Secondly, since FDNet is based on gradient and intensity feature extraction, gradient and intensity losses are designed accordingly. The intensity loss adopts the Frobenius norm to adjust the weighing values between the two source images to select more effective intensity information. The gradient loss introduces an adaptive weight that determines the optimized objective from the gradient richness at the pixel scale, ultimately guiding the fused image toward more abundant gradient information. Finally, we design a single- and dual-channel layer decomposition network, which keeps the decomposed images as consistent as possible with the inputs, forcing the fused image to contain richer detail information. Compared with various representative fusion methods, our proposed method not only has good subjective visual quality, but also achieves advanced performance in objective evaluation.
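The Frobenius-norm intensity loss mentioned above can be illustrated in a few lines. This is a hedged sketch: FDNet adjusts the weighing values between the two sources adaptively, whereas fixed `w_ir`/`w_vis` weights are assumed here for brevity:

```python
import numpy as np

def intensity_loss(fused, ir, vis, w_ir=0.6, w_vis=0.4):
    """Penalize the Frobenius distance of the fused image from each
    source, normalized by image size; the weights steer which source's
    intensity dominates the fused result."""
    h, w = fused.shape
    l_ir = np.linalg.norm(fused - ir, ord="fro") ** 2 / (h * w)
    l_vis = np.linalg.norm(fused - vis, ord="fro") ** 2 / (h * w)
    return w_ir * l_ir + w_vis * l_vis
```

Raising `w_ir` pulls the fused intensity toward the infrared image (useful for hot targets); raising `w_vis` preserves visible-band brightness.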

Language: English

Citations

4

Infrared and Visible Image Fusion: Statistical Analysis, Deep Learning Methods, and Future Prospects

Wu Yifei,

Yang Rui,

Qishen Lü

et al.

Laser & Optoelectronics Progress, Journal Year: 2024, Volume and Issue: 61(14), P. 1400004 - 1400004

Published: Jan. 1, 2024

Citations

1

ADF‐Net: Attention‐guided deep feature decomposition network for infrared and visible image fusion
Sen Shen, Taotao Zhang,

Haidi Dong

et al.

IET Image Processing, Journal Year: 2024, Volume and Issue: 18(10), P. 2774 - 2787

Published: May 23, 2024

Abstract: To effectively enhance the ability to acquire information by making full use of the complementary features of infrared and visible images, widely used image fusion algorithms face challenges such as information loss and image blurring. In response to this issue, the authors propose a dual-branch attention-guided deep feature decomposition fusion network (ADF-Net). Initially, a convolution module extracts the shallow features of the image. Subsequently, a decomposition feature extractor is introduced, in which the transformer encoder block (TEB) employs long-range attention to process low-frequency global features, while the CNN encoder block (CEB) extracts high-frequency local information. Ultimately, a fusion layer based on the TEB and CEB produces the fused image through the decoder. Multiple experiments demonstrate that ADF-Net excels in various aspects by utilizing two-stage training and an appropriate loss function.
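The low-/high-frequency split that the TEB and CEB branches operate on can be imitated with a fixed filter. In the toy NumPy sketch below, a box blur stands in for the low-frequency (global) branch and the residual for the high-frequency (local) branch; the actual network learns this decomposition end-to-end, so this is only an analogy:

```python
import numpy as np

def decompose(img, r=2):
    """Split an image into a smooth low-frequency part (box blur) and a
    high-frequency residual, so that low + high reconstructs the input."""
    img = img.astype(np.float64)
    pad = np.pad(img, r, mode="edge")
    k = 2 * r + 1
    low = np.zeros_like(img)
    for dr in range(k):          # accumulate the (2r+1)x(2r+1) box mean
        for dc in range(k):
            low += pad[dr : dr + img.shape[0], dc : dc + img.shape[1]]
    low /= k * k
    high = img - low             # detail left over for the "local" branch
    return low, high
```

By construction the two parts sum back to the input, mirroring how the fused image is reassembled from both branches by the decoder.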

Language: English

Citations

1

YOFIR: High precise infrared object detection algorithm based on YOLO and FasterNet

Mi Wen,

Chenyang Li, Yi Xue

et al.

Infrared Physics & Technology, Journal Year: 2024, Volume and Issue: unknown, P. 105627 - 105627

Published: Nov. 1, 2024

Language: English

Citations

1

CMRFusion: A cross-domain multi-resolution fusion method for infrared and visible image fusion
Zhang Xiong,

Yuanjia Cao,

Xiaohui Zhang

et al.

Optics and Lasers in Engineering, Journal Year: 2023, Volume and Issue: 170, P. 107765 - 107765

Published: July 31, 2023

Language: English

Citations

3

AFSFusion: An Adjacent Feature Shuffle Combination Network for Infrared and Visible Image Fusion

Yufeng Hu,

Shaoping Xu,

Wei-Hua Lin

et al.

Applied Sciences, Journal Year: 2023, Volume and Issue: 13(9), P. 5640 - 5640

Published: May 3, 2023

To obtain fused images with excellent contrast, distinct target edges, and well-preserved details, we propose an adaptive image fusion network called the adjacent feature shuffle-fusion network (AFSFusion). The proposed network adopts a UNet-like architecture and incorporates key refinements to the architecture and loss functions. Regarding the architecture, a two-branch adjacent feature fusion module, AFSF, expands the number of channels to fuse the feature channels of several adjacent convolutional layers in the first half of AFSFusion, enhancing its ability to extract, transmit, and modulate feature information. We replace the original rectified linear unit (ReLU) with leaky ReLU to alleviate the problem of gradient disappearance, and add a channel shuffling operation at the end of AFSF to facilitate information interaction between feature channels. Concerning the loss functions, we propose an adaptive weight adjustment (AWA) strategy to assign weight values to the corresponding pixels of the infrared (IR) and visible images, according to the VGG16 responses of the IR and visible images. This strategy efficiently handles different scene contents. After normalization, the weight values are used as weighting coefficients for the two sets of images. The weighting coefficients are applied to three loss items simultaneously: mean square error (MSE), structural similarity (SSIM), and total variation (TV), resulting in clearer objects and richer texture detail in the fused images. We conducted a series of experiments on benchmark databases, and the results demonstrate the effectiveness and superiority of the proposed network compared with other state-of-the-art fusion methods. It ranks first in several objective metrics, showing the best performance and exhibiting sharper edges of specific targets, which is more in line with human visual perception. This remarkable enhancement is ascribed to the AFSF module and AWA strategy, which enable balanced extraction, fusion, and modulation of features throughout the fusion process.
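The AWA idea above, per-pixel weights derived from feature responses and applied to the loss terms, can be sketched as follows. Note the assumptions: the real method derives the responses from VGG16 and applies the same coefficients to SSIM and TV terms as well, whereas precomputed stand-in responses and an MSE term alone are used here:

```python
import numpy as np

def awa_weights(ir_resp, vis_resp, eps=1e-8):
    """Normalize two per-pixel response maps into weights that sum to 1,
    so the stronger source dominates the loss at each pixel."""
    total = ir_resp + vis_resp + eps
    return ir_resp / total, vis_resp / total

def weighted_mse(fused, ir, vis, w_ir, w_vis):
    """Apply the per-pixel weights to an MSE term; AFSFusion applies the
    same weighting scheme to its SSIM and TV terms."""
    return np.mean(w_ir * (fused - ir) ** 2 + w_vis * (fused - vis) ** 2)
```

Where the IR response dominates (e.g. a hot target), the loss pulls the fused pixel toward the IR intensity; where the visible response dominates, texture from the visible image is favored.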

Language: English

Citations

2

Rectification for Stitched Images with Deformable Meshes and Residual Networks
Yingbo Fan, Shanjun Mao, Mei Li

et al.

Applied Sciences, Journal Year: 2024, Volume and Issue: 14(7), P. 2821 - 2821

Published: March 27, 2024

Image stitching is an important method in digital image processing, but stitched images are often prone to irregular boundaries, and traditional cropping or completion methods usually lead to a large amount of information loss. Therefore, this paper proposes a rectification method based on deformable meshes and a residual network. The method aims to minimize the information loss both at the edges of the stitched image and inside the image. Specifically, it can select the most suitable mesh shape for the residual-network regression according to different images. Its loss function includes a global loss and a local loss, aiming to minimize the loss of image information within the grid and at the edges of the target image. The proposed method not only greatly reduces the information loss caused by the irregular shapes of stitched images, but also adapts to images with various rigid structures. Meanwhile, validation on the DIR-D dataset shows that the method outperforms state-of-the-art methods in image rectification.

Language: English

Citations

0

MGFA: A multi-scale global feature autoencoder to fuse infrared and visible images
Xiaoxuan Chen, Shuwen Xu,

Shaohai Hu

et al.

Signal Processing Image Communication, Journal Year: 2024, Volume and Issue: 128, P. 117168 - 117168

Published: July 14, 2024

Citations

0

UIRGBfuse: Revisiting infrared and visible image fusion from the unified fusion of infrared channel with R, G, and B channels
Yi Shi,

Guo Si,

Mengting Chen

et al.

Infrared Physics & Technology, Journal Year: 2024, Volume and Issue: 143, P. 105626 - 105626

Published: Nov. 15, 2024

Language: English

Citations

0