LRRNet: A Novel Representation Learning Guided Fusion Network for Infrared and Visible Images
Hui Li, Tianyang Xu, Xiao‐Jun Wu, et al.

IEEE Transactions on Pattern Analysis and Machine Intelligence, Journal Year: 2023, Volume and Issue: 45(9), P. 11040 - 11052

Published: April 19, 2023

Deep learning based fusion methods have been achieving promising performance in image fusion tasks. This is attributed to the network architecture, which plays a very important role in the fusion process. However, in general, it is hard to specify a good architecture, and consequently, the design of fusion networks is still a black art rather than a science. To address this problem, we formulate the fusion task mathematically and establish a connection between its optimal solution and the network architecture that can implement it. This approach leads to a novel method, proposed in this paper, for constructing a lightweight fusion network. It avoids the time-consuming empirical network design driven by a trial-and-test strategy. In particular, we adopt a learnable representation approach to the fusion task, in which the construction of the fusion network architecture is guided by the optimisation algorithm producing the learnable model. The low-rank representation (LRR) objective is the foundation of our learnable model. The matrix multiplications, which are at the heart of the solution, are transformed into convolutional operations, and the iterative optimisation process is replaced by a special feed-forward network. Based on this novel architecture, an end-to-end lightweight fusion network is constructed to fuse infrared and visible light images. Its successful training is facilitated by a detail-to-semantic information loss function proposed to preserve the image details and to enhance the salient features of the source images. Our experiments show that the proposed fusion network exhibits better fusion performance than the state-of-the-art fusion methods on public datasets. Interestingly, our network requires fewer training parameters than other existing methods.
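
The core idea above, replacing the matrix operations of an iterative LRR solver with stacked convolutional updates in a feed-forward network, can be illustrated with a small sketch. The module below is a hypothetical, simplified unrolling written in PyTorch; the layer layout, channel counts, and update rule are illustrative assumptions rather than the authors' released LRRNet architecture.

```python
# A minimal, hypothetical sketch of unrolling an iterative low-rank
# representation (LRR) solver into a feed-forward stack of convolutional
# update steps.  Names and hyper-parameters are illustrative assumptions.
import torch
import torch.nn as nn


class UnrolledLRRBlock(nn.Module):
    """One unrolled 'iteration': refine low-rank (base) and salient (detail)
    feature maps with learned convolutions instead of matrix multiplications."""

    def __init__(self, channels: int):
        super().__init__()
        self.update_lowrank = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.update_salient = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, x, lowrank, salient):
        # Each update sees the input features and the current residual,
        # mimicking one step of an alternating-minimisation scheme.
        lowrank = lowrank + torch.tanh(
            self.update_lowrank(torch.cat([x - salient, lowrank], dim=1)))
        salient = salient + torch.tanh(
            self.update_salient(torch.cat([x - lowrank, salient], dim=1)))
        return lowrank, salient


class TinyUnrolledNet(nn.Module):
    """A toy single-image version: embed, run a few unrolled steps, reconstruct."""

    def __init__(self, channels: int = 16, steps: int = 4):
        super().__init__()
        self.embed = nn.Conv2d(1, channels, 3, padding=1)
        self.steps = nn.ModuleList([UnrolledLRRBlock(channels) for _ in range(steps)])
        self.reconstruct = nn.Conv2d(2 * channels, 1, 3, padding=1)

    def forward(self, image):
        x = self.embed(image)
        lowrank, salient = torch.zeros_like(x), torch.zeros_like(x)
        for step in self.steps:
            lowrank, salient = step(x, lowrank, salient)
        return self.reconstruct(torch.cat([lowrank, salient], dim=1))


if __name__ == "__main__":
    net = TinyUnrolledNet()
    print(net(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 1, 64, 64])
```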

Language: English

DDcGAN: A Dual-Discriminator Conditional Generative Adversarial Network for Multi-Resolution Image Fusion
Jiayi Ma, Han Xu, Junjun Jiang, et al.

IEEE Transactions on Image Processing, Journal Year: 2020, Volume and Issue: 29, P. 4980 - 4995

Published: Jan. 1, 2020

In this paper, we propose a new end-to-end model, termed the dual-discriminator conditional generative adversarial network (DDcGAN), for fusing infrared and visible images of different resolutions. Our method establishes an adversarial game between a generator and two discriminators. The generator aims to generate a real-like fused image based on a specifically designed content loss to fool the two discriminators, while the two discriminators aim to distinguish the structure differences between the fused image and the two source images, respectively, in addition to the content loss. Consequently, the fused image is forced to simultaneously keep the thermal radiation of the infrared image and the texture details of the visible image. Moreover, to fuse source images of different resolutions, e.g., a low-resolution infrared image and a high-resolution visible image, our DDcGAN constrains the downsampled fused image to have similar properties to the infrared image. This avoids thermal radiation information blurring or visible texture detail loss, which typically happens in traditional methods. In addition, we also apply our DDcGAN to fusing multi-modality medical images of different resolutions, e.g., a low-resolution positron emission tomography image and a high-resolution magnetic resonance image. Qualitative and quantitative experiments on publicly available datasets demonstrate the superiority of our DDcGAN over the state-of-the-art, in terms of both visual effect and quantitative metrics. The code is available at https://github.com/jiayi-ma/DDcGAN.
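
As a rough illustration of the training signal described above, the sketch below combines two adversarial terms (one discriminator per modality) with a content term that ties the downsampled fused image to the low-resolution infrared input and a gradient term that preserves visible texture. The loss weights, pooling factor, and discriminator interfaces are assumptions made for illustration, not the released DDcGAN objective.

```python
# Hedged sketch of a dual-discriminator generator loss with a
# downsampling constraint; weights and forms are illustrative assumptions.
import torch
import torch.nn.functional as F


def image_gradients(img):
    """Simple forward-difference gradients along x and y."""
    gx = img[..., :, 1:] - img[..., :, :-1]
    gy = img[..., 1:, :] - img[..., :-1, :]
    return gx, gy


def generator_loss(fused, ir_lowres, vis, d_ir, d_vis,
                   lambda_content=0.5, lambda_grad=1.0):
    # Adversarial terms (least-squares form): the generator wants both
    # discriminators to score the fused image as "real".
    fused_lowres = F.avg_pool2d(fused, 4)
    adv = (d_ir(fused_lowres) - 1).pow(2).mean() + (d_vis(fused) - 1).pow(2).mean()

    # Content term: the downsampled fused image should match the
    # low-resolution infrared intensities (keeps thermal radiation sharp).
    content = F.mse_loss(fused_lowres, ir_lowres)

    # Gradient term: keep the visible image's texture details.
    fgx, fgy = image_gradients(fused)
    vgx, vgy = image_gradients(vis)
    texture = F.l1_loss(fgx, vgx) + F.l1_loss(fgy, vgy)

    return adv + lambda_content * content + lambda_grad * texture


if __name__ == "__main__":
    # Dummy discriminators that map an image to one score per sample.
    dummy_d = lambda x: torch.sigmoid(x.mean(dim=(1, 2, 3)))
    fused = torch.rand(2, 1, 64, 64, requires_grad=True)
    loss = generator_loss(fused, torch.rand(2, 1, 16, 16),
                          torch.rand(2, 1, 64, 64), dummy_d, dummy_d)
    print(loss.item())
```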

Language: English

Citations: 859

Image Matching from Handcrafted to Deep Features: A Survey
Jiayi Ma, Xingyu Jiang, Aoxiang Fan, et al.

International Journal of Computer Vision, Journal Year: 2020, Volume and Issue: 129(1), P. 23 - 79

Published: Aug. 4, 2020

As a fundamental and critical task in various visual applications, image matching can identify and then correspond the same or similar structure/content from two or more images. Over the past decades, a growing number and diversity of methods have been proposed for image matching, particularly with the development of deep learning techniques over recent years. However, several questions remain open: which method would be a suitable choice for specific applications with respect to different scenarios and requirements, and how to design better image matching methods with superior performance in accuracy, robustness and efficiency. This encourages us to conduct a comprehensive and systematic review and analysis of classical and recent techniques. Following the feature-based image matching pipeline, we first introduce feature detection, description, and matching techniques, from handcrafted methods to trainable ones, and provide an analysis of their development in theory and practice. Secondly, we briefly introduce several typical image matching-based applications to convey the significance of image matching. In addition, we provide an objective comparison of these classical and recent techniques through extensive experiments on representative datasets. Finally, we conclude with the current status of image matching technologies and deliver insightful discussions and prospects for future work. This survey can serve as a reference for (but not limited to) researchers and engineers in image matching and related fields.
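
For readers unfamiliar with the feature-based pipeline the survey follows (detection, description, matching, then geometric verification), here is a minimal sketch using OpenCV's ORB as a stand-in for any handcrafted detector and descriptor; the synthetic test images and thresholds are placeholders chosen only so the snippet runs stand-alone.

```python
# Minimal feature-based matching pipeline: detect -> describe -> match -> verify.
import cv2
import numpy as np

# Build a blocky synthetic scene; in practice replace img1/img2 with two
# real views of the same scene.
rng = np.random.default_rng(0)
img1 = (rng.random((60, 80)) * 255).astype(np.uint8)
img1 = cv2.resize(img1, (320, 240), interpolation=cv2.INTER_NEAREST)
img2 = np.roll(img1, shift=(7, 15), axis=(0, 1))  # shifted copy to match against

# 1) Feature detection and description.
orb = cv2.ORB_create(nfeatures=2000)
kps1, des1 = orb.detectAndCompute(img1, None)
kps2, des2 = orb.detectAndCompute(img2, None)

# 2) Descriptor matching with Lowe's ratio test to prune ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

# 3) Geometric verification: fit a homography with RANSAC and count inliers.
src = np.float32([kps1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kps2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(f"{int(mask.sum())} inlier matches out of {len(good)} candidates")
```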

Language: English

Citations: 749

RFN-Nest: An end-to-end residual fusion network for infrared and visible images
Hui Li, Xiao‐Jun Wu, Josef Kittler, et al.

Information Fusion, Journal Year: 2021, Volume and Issue: 73, P. 72 - 86

Published: March 1, 2021

Language: English

Citations: 614

Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network
Linfeng Tang, Jiteng Yuan, Jiayi Ma, et al.

Information Fusion, Journal Year: 2022, Volume and Issue: 82, P. 28 - 42

Published: Jan. 1, 2022

Language: English

Citations: 529

Image fusion meets deep learning: A survey and perspective
Hao Zhang, Han Xu, Xin Tian, et al.

Information Fusion, Journal Year: 2021, Volume and Issue: 76, P. 323 - 336

Published: July 6, 2021

Language: English

Citations: 501

MDLatLRR: A Novel Decomposition Method for Infrared and Visible Image Fusion
Hui Li, Xiao‐Jun Wu, Josef Kittler, et al.

IEEE Transactions on Image Processing, Journal Year: 2020, Volume and Issue: 29, P. 4733 - 4746

Published: Jan. 1, 2020

Image decomposition is crucial for many image processing tasks, as it allows salient features to be extracted from source images. A good decomposition method can lead to better performance, especially in image fusion tasks. We propose a multi-level image decomposition method based on latent low-rank representation (LatLRR), called MDLatLRR. This decomposition method is applicable to many image processing fields; in this paper, we focus on the image fusion task. We develop a novel fusion framework based on MDLatLRR, which is used to decompose source images into detail parts (salient features) and base parts. A nuclear-norm based strategy is used to fuse the detail parts, and the base parts are fused by an averaging strategy. Compared with other state-of-the-art fusion methods, the proposed algorithm exhibits better fusion performance in both subjective and objective evaluation.
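
The two fusion rules mentioned above can be sketched compactly: base parts are averaged, and detail parts are combined with nuclear-norm derived weights. The snippet below is a simplified illustration; the paper applies the nuclear-norm strategy patch-wise on LatLRR detail parts, whereas here the weights are computed globally per detail image purely for brevity.

```python
# Simplified sketch of the two fusion rules: averaging for base parts,
# nuclear-norm weighting for detail parts (global weights as an assumption).
import numpy as np


def fuse_base(base_a, base_b):
    """Averaging strategy for the low-frequency base parts."""
    return 0.5 * (base_a + base_b)


def fuse_detail(detail_a, detail_b):
    """Nuclear-norm weighted strategy for the salient detail parts."""
    w_a = np.linalg.norm(detail_a, ord="nuc")
    w_b = np.linalg.norm(detail_b, ord="nuc")
    total = w_a + w_b + 1e-12
    return (w_a * detail_a + w_b * detail_b) / total


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base_ir, base_vis = rng.random((64, 64)), rng.random((64, 64))
    detail_ir, detail_vis = rng.random((64, 64)), rng.random((64, 64))
    fused = fuse_base(base_ir, base_vis) + fuse_detail(detail_ir, detail_vis)
    print(fused.shape)  # (64, 64)
```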

Language: English

Citations: 451

PIAFusion: A progressive infrared and visible image fusion network based on illumination aware
Linfeng Tang, Jiteng Yuan, Hao Zhang, et al.

Information Fusion, Journal Year: 2022, Volume and Issue: 83-84, P. 79 - 92

Published: March 29, 2022

Language: English

Citations: 420

GANMcC: A Generative Adversarial Network With Multiclassification Constraints for Infrared and Visible Image Fusion
Jiayi Ma, Hao Zhang, Zhenfeng Shao, et al.

IEEE Transactions on Instrumentation and Measurement, Journal Year: 2020, Volume and Issue: 70, P. 1 - 14

Published: Dec. 1, 2020

Visible images contain rich texture information, whereas infrared images have significant contrast. It is advantageous to combine these two kinds of information into a single image so that it not only has good contrast but also contains rich texture details. In general, previous fusion methods cannot achieve this goal well, as the fused results are inclined to either the visible or the infrared image. To address this challenge, a new fusion framework called the generative adversarial network with multiclassification constraints (GANMcC) is proposed, which transforms image fusion into a multidistribution simultaneous estimation problem to fuse infrared and visible images in a more reasonable way. We adopt a generative adversarial network with multiclassification constraints to estimate the distributions of the visible light and infrared domains at the same time, in which the game of multiclassification discrimination makes the fused result balance these two distributions, giving both significant contrast and rich texture details. In addition, we design a specific content loss to constrain the generator, which introduces the idea of main and auxiliary information into the extraction of gradient and intensity information; this enables the generator to extract sufficient information from the source images in a complementary manner. Extensive experiments demonstrate the advantages of our GANMcC over the state-of-the-art methods in terms of both qualitative effect and quantitative metrics. Moreover, our method can achieve good fusion results even when the visible image is overexposed. Our code is publicly available at https://github.com/jiayi-ma/GANMcC.
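
The "main and auxiliary" content loss described above can be sketched as follows: fused intensities follow the infrared image as the main cue (visible as auxiliary), while fused gradients follow the visible image as the main cue (infrared as auxiliary). The weights and the exact distance terms below are illustrative assumptions, not the values used in the paper.

```python
# Hedged sketch of a main/auxiliary content loss; weights are assumptions.
import torch
import torch.nn.functional as F


def image_gradients(img):
    """Simple forward-difference gradients along x and y."""
    gx = img[..., :, 1:] - img[..., :, :-1]
    gy = img[..., 1:, :] - img[..., :-1, :]
    return gx, gy


def content_loss(fused, ir, vis, main_w=1.0, aux_w=0.3):
    # Intensity: infrared is the main source, visible the auxiliary one.
    intensity = main_w * F.l1_loss(fused, ir) + aux_w * F.l1_loss(fused, vis)

    # Gradient (texture): visible is the main source, infrared the auxiliary one.
    fgx, fgy = image_gradients(fused)
    vgx, vgy = image_gradients(vis)
    igx, igy = image_gradients(ir)
    gradient = main_w * (F.l1_loss(fgx, vgx) + F.l1_loss(fgy, vgy)) \
        + aux_w * (F.l1_loss(fgx, igx) + F.l1_loss(fgy, igy))

    return intensity + gradient


if __name__ == "__main__":
    fused = torch.rand(1, 1, 64, 64, requires_grad=True)
    ir, vis = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
    print(content_loss(fused, ir, vis).item())
```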

Language: English

Citations: 401

Pan-GAN: An unsupervised pan-sharpening method for remote sensing image fusion
Jiayi Ma, Wei Yu, Chen Chen, et al.

Information Fusion, Journal Year: 2020, Volume and Issue: 62, P. 110 - 120

Published: May 1, 2020

Language: English

Citations: 374

SDNet: A Versatile Squeeze-and-Decomposition Network for Real-Time Image Fusion
Hao Zhang, Jiayi Ma

International Journal of Computer Vision, Journal Year: 2021, Volume and Issue: 129(10), P. 2761 - 2785

Published: July 30, 2021

Language: English

Citations: 362