Computer vision applications in construction safety assurance DOI

Weili Fang, Lieyun Ding, Peter E.D. Love

et al.

Automation in Construction, Journal Year: 2019, Volume and Issue: 110, P. 103013 - 103013

Published: Dec. 6, 2019

Language: English

FusionGAN: A generative adversarial network for infrared and visible image fusion DOI
Jiayi Ma, Wei Yu, Pengwei Liang

et al.

Information Fusion, Journal Year: 2018, Volume and Issue: 48, P. 11 - 26

Published: Sept. 5, 2018

Language: English

Citations

1460

DDcGAN: A Dual-Discriminator Conditional Generative Adversarial Network for Multi-Resolution Image Fusion DOI
Jiayi Ma, Han Xu, Junjun Jiang

et al.

IEEE Transactions on Image Processing, Journal Year: 2020, Volume and Issue: 29, P. 4980 - 4995

Published: Jan. 1, 2020

In this paper, we propose a new end-to-end model, termed dual-discriminator conditional generative adversarial network (DDcGAN), for fusing infrared and visible images of different resolutions. Our method establishes an adversarial game between a generator and two discriminators. The generator aims to generate a real-like fused image, based on a specifically designed content loss, to fool the two discriminators, while the two discriminators aim to distinguish the structure differences between the fused image and the two source images, respectively, in addition to the content loss. Consequently, the fused image is forced to simultaneously keep the thermal radiation of the infrared image and the texture details of the visible image. Moreover, to fuse source images of different resolutions, e.g., a low-resolution infrared image and a high-resolution visible image, our DDcGAN constrains the downsampled fused image to have a similar property to the infrared image. This avoids thermal information blurring or texture detail loss, which typically happens in traditional methods. In addition, we also apply our DDcGAN to fusing multi-modality medical images of different resolutions, e.g., a low-resolution positron emission tomography image and a high-resolution magnetic resonance image. Qualitative and quantitative experiments on publicly available datasets demonstrate the superiority of our method over the state-of-the-art, in terms of both visual effect and quantitative metrics. Our code is available at https://github.com/jiayi-ma/DDcGAN.
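
To make the architecture described above concrete, the sketch below shows how a dual-discriminator objective of this kind can be wired up in PyTorch: one generator fuses the two inputs, one discriminator judges the fused image against visible images, and the other judges its downsampled version against the low-resolution infrared image. This is only a minimal illustration under assumed layer sizes, loss weights and a crude high-pass "detail" term; it is not the authors' released implementation (see the linked repository).

```python
# Minimal sketch of a dual-discriminator fusion setup (illustrative only;
# module names, network sizes and loss weights are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F


class Generator(nn.Module):
    """Fuses a visible image and an (upsampled) infrared image into one image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, visible, infrared_lr):
        # Upsample the low-resolution infrared input to the visible resolution.
        infrared_up = F.interpolate(infrared_lr, size=visible.shape[-2:],
                                    mode="bilinear", align_corners=False)
        return self.net(torch.cat([visible, infrared_up], dim=1))


class Discriminator(nn.Module):
    """Scores whether an image looks like a sample from one source modality."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)


def discriminator_loss(d, real, fake):
    """Standard non-saturating GAN discriminator loss (real source vs. fused)."""
    return F.softplus(-d(real)).mean() + F.softplus(d(fake.detach())).mean()


def generator_loss(g, d_vis, d_ir, visible, infrared_lr, lam=0.5):
    """Content loss plus adversarial terms against BOTH discriminators.

    The downsampled fused image is compared with the low-resolution infrared
    image, and a high-pass "detail" term is compared with the visible image,
    mirroring the keep-thermal-radiation / keep-texture-detail objective.
    Assumes the visible width is an integer multiple of the infrared width.
    """
    fused = g(visible, infrared_lr)
    scale = visible.shape[-1] // infrared_lr.shape[-1]
    fused_ds = F.avg_pool2d(fused, kernel_size=scale)
    content = F.mse_loss(fused_ds, infrared_lr) + F.l1_loss(
        fused - F.avg_pool2d(fused, 3, stride=1, padding=1),
        visible - F.avg_pool2d(visible, 3, stride=1, padding=1))
    # Fool both discriminators at once.
    adv = F.softplus(-d_vis(fused)).mean() + F.softplus(-d_ir(fused_ds)).mean()
    return content + lam * adv
```

Training would alternate in the usual GAN fashion: update each discriminator with `discriminator_loss` (visible vs. fused, infrared vs. downsampled fused), then update the generator with `generator_loss`.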

Language: English

Citations

852

RFN-Nest: An end-to-end residual fusion network for infrared and visible images DOI
Hui Li, Xiao‐Jun Wu, Josef Kittler

et al.

Information Fusion, Journal Year: 2021, Volume and Issue: 73, P. 72 - 86

Published: March 1, 2021

Language: English

Citations

606

Image fusion meets deep learning: A survey and perspective DOI
Hao Zhang, Han Xu, Xin Tian

et al.

Information Fusion, Journal Year: 2021, Volume and Issue: 76, P. 323 - 336

Published: July 6, 2021

Language: English

Citations

501

Rethinking the Image Fusion: A Fast Unified Image Fusion Network based on Proportional Maintenance of Gradient and Intensity DOI Open Access
Hao Zhang, Han Xu, Yang Xiao

et al.

Proceedings of the AAAI Conference on Artificial Intelligence, Journal Year: 2020, Volume and Issue: 34(07), P. 12797 - 12804

Published: April 3, 2020

In this paper, we propose a fast unified image fusion network based on proportional maintenance of gradient and intensity (PMGI), which can realize a variety of image fusion tasks end-to-end, including infrared and visible image fusion, multi-exposure image fusion, medical image fusion, multi-focus image fusion and pan-sharpening. We unify the image fusion problem as the proportional maintenance of the texture and intensity of the source images. On the one hand, the network is divided into a gradient path and an intensity path for information extraction. We perform feature reuse within the same path to avoid information loss due to convolution. At the same time, we introduce a pathwise transfer block to exchange information between the different paths, which not only pre-fuses the gradient and intensity information but also enhances the information to be processed later. On the other hand, we define a uniform form of loss function based on these two kinds of information, which can adapt to different fusion tasks. Experiments on publicly available datasets demonstrate the superiority of our PMGI over the state-of-the-art in terms of both visual effect and quantitative metrics across a variety of fusion tasks. In addition, our method is faster than the state-of-the-art.
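
A minimal sketch of the loss form this describes is given below: a single weighted objective that keeps a chosen proportion of each source's intensity and of each source's gradient, with the proportions acting as the task-specific knob. The Sobel gradient operator, the weights and the coefficient names are assumptions for illustration, not the exact PMGI formulation.

```python
# Illustrative "proportional maintenance of gradient and intensity" style loss.
import torch
import torch.nn.functional as F

_SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
_SOBEL_Y = _SOBEL_X.transpose(2, 3)


def gradient(img):
    """Per-pixel gradient magnitude via Sobel filtering (single-channel input)."""
    gx = F.conv2d(img, _SOBEL_X.to(img), padding=1)
    gy = F.conv2d(img, _SOBEL_Y.to(img), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)


def pmgi_style_loss(fused, src1, src2, w_int=(0.5, 0.5), w_grad=(0.5, 0.5), lam=5.0):
    """Unified loss: an intensity term plus a gradient term, each a weighted mix
    of the two sources. Changing the proportions (w_int, w_grad) adapts the same
    form to different fusion tasks.
    """
    loss_int = (w_int[0] * F.mse_loss(fused, src1) +
                w_int[1] * F.mse_loss(fused, src2))
    g_f, g_1, g_2 = gradient(fused), gradient(src1), gradient(src2)
    loss_grad = (w_grad[0] * F.mse_loss(g_f, g_1) +
                 w_grad[1] * F.mse_loss(g_f, g_2))
    return loss_int + lam * loss_grad


# Example: infrared/visible fusion usually leans on infrared intensity and
# visible texture, so the proportions could be set asymmetrically:
# loss = pmgi_style_loss(fused, ir, vis, w_int=(0.8, 0.2), w_grad=(0.2, 0.8))
```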

Language: English

Citations

421

Artificial intelligence in the creative industries: a review DOI Creative Commons
Nantheera Anantrasirichai, David Bull

Artificial Intelligence Review, Journal Year: 2021, Volume and Issue: 55(1), P. 589 - 656

Published: July 2, 2021

This paper reviews the current state of the art in Artificial Intelligence (AI) technologies and applications in the context of the creative industries. A brief background of AI, and specifically Machine Learning (ML) algorithms, is provided, including Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), Recurrent Neural Networks (RNNs) and Deep Reinforcement Learning (DRL). We categorise creative applications into five groups related to how AI technologies are used: i) content creation, ii) information analysis, iii) content enhancement and post production workflows, iv) information extraction and enhancement, and v) data compression. We critically examine the successes and limitations of this rapidly advancing technology in each of these areas. We further differentiate between the use of AI as a creative tool and its potential as a creator in its own right. We foresee that, in the near future, machine learning-based AI will be adopted widely as a tool or collaborative assistant for creativity. In contrast, we observe that the successes of machine learning in domains with fewer constraints, where AI is the 'creator', remain modest. The potential of AI (or its developers) to win awards for original creations in competition with human creatives is also limited, based on contemporary technologies. We therefore conclude that, in the creative industries, the maximum benefit from AI will be derived where its focus is human centric -- where it is designed to augment, rather than replace, human creativity.

Language: English

Citations

421

GANMcC: A Generative Adversarial Network With Multiclassification Constraints for Infrared and Visible Image Fusion DOI
Jiayi Ma, Hao Zhang, Zhenfeng Shao

et al.

IEEE Transactions on Instrumentation and Measurement, Journal Year: 2020, Volume and Issue: 70, P. 1 - 14

Published: Dec. 1, 2020

Visible images contain rich texture information, whereas infrared images have significant contrast. It is advantageous to combine these two kinds of information into a single image so that it not only has good contrast but also contains rich texture details. In general, previous fusion methods cannot achieve this goal well: the fused results are inclined toward either the visible image or the infrared image. To address this challenge, a new fusion framework called generative adversarial network with multiclassification constraints (GANMcC) is proposed, which transforms image fusion into a multidistribution simultaneous estimation problem to fuse infrared and visible images in a more reasonable way. We adopt a generative adversarial network with multiclassification constraints to estimate the distributions of the visible light and infrared domains at the same time, in which the game of multiclassification discrimination makes the fused result hold these two distributions in a more balanced manner, so as to have both significant contrast and rich texture details. In addition, we design a specific content loss to constrain the generator, which introduces the idea of main and auxiliary information into the extraction of gradient and intensity, enabling the generator to extract sufficient information from the source images in a complementary manner. Extensive experiments demonstrate the advantages of our GANMcC over the state-of-the-art in terms of both qualitative effect and quantitative metrics. Moreover, our method achieves good fusion results even when the visible image is overexposed. Our code is publicly available at https://github.com/jiayi-ma/GANMcC.
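
The sketch below illustrates the multiclassification constraint in PyTorch: a discriminator produces separate "visible" and "infrared" scores, real samples are labelled with their own class only, and the generator is rewarded when the fused image scores highly on both classes, which is what pushes the result toward a balanced mixture of the two distributions. This is a simplified single-discriminator illustration with assumed layer sizes and loss choices, not the paper's released code (see the linked repository).

```python
# Simplified multiclassification-constraint sketch (illustrative assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiClassDiscriminator(nn.Module):
    """Outputs two logits per image: [looks_visible, looks_infrared]."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 2)  # one logit per source class

    def forward(self, x):
        return self.head(self.features(x))


def discriminator_loss(d, visible, infrared, fused):
    """Real visible -> 'visible' class only, real infrared -> 'infrared' class
    only, fused -> neither (all-zero targets)."""
    ones = visible.new_ones(visible.size(0), 1)
    zeros = visible.new_zeros(visible.size(0), 1)
    loss_vis = F.binary_cross_entropy_with_logits(d(visible), torch.cat([ones, zeros], 1))
    loss_ir = F.binary_cross_entropy_with_logits(d(infrared), torch.cat([zeros, ones], 1))
    loss_fused = F.binary_cross_entropy_with_logits(d(fused.detach()),
                                                    torch.cat([zeros, zeros], 1))
    return loss_vis + loss_ir + loss_fused


def generator_adversarial_loss(d, fused):
    """The generator wants the fused image classified as BOTH visible and
    infrared, balancing the two distributions in the fused result."""
    target = fused.new_ones(fused.size(0), 2)
    return F.binary_cross_entropy_with_logits(d(fused), target)
```

In the paper this adversarial term is combined with a content loss on gradient and intensity; the weighting between the two is a separate hyperparameter not shown here.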

Language: English

Citations

396

Pan-GAN: An unsupervised pan-sharpening method for remote sensing image fusion DOI
Jiayi Ma, Wei Yu, Chen Chen

et al.

Information Fusion, Journal Year: 2020, Volume and Issue: 62, P. 110 - 120

Published: May 1, 2020

Language: English

Citations

374

Infrared and visible image fusion via detail preserving adversarial learning DOI
Jiayi Ma, Pengwei Liang, Wei Yu

et al.

Information Fusion, Journal Year: 2019, Volume and Issue: 54, P. 85 - 98

Published: July 22, 2019

Language: English

Citations

363

Illumination-aware faster R-CNN for robust multispectral pedestrian detection DOI
Chengyang Li, Dan Song, Ruofeng Tong

et al.

Pattern Recognition, Journal Year: 2018, Volume and Issue: 85, P. 161 - 171

Published: Aug. 13, 2018

Language: English

Citations

352