A visible-infrared clothes-changing dataset for person re-identification in natural scene

Xianbin Wei, Kechen Song, Wenkang Yang

et al.

Neurocomputing, Year: 2023, Vol. 569, pp. 127110 - 127110

Published: Dec. 12, 2023

Language: English

Mirror complementary transformer network for RGB-thermal salient object detection
Xiurong Jiang, Yifan Hou, Hui Tian

et al.

IET Computer Vision, Year: 2023, Vol. 18(1), pp. 15 - 32

Published: June 28, 2023

Abstract: Conventional RGB-T salient object detection (SOD) treats the RGB and thermal modalities equally to locate common salient regions. However, the authors observed that the rich colour and texture information of the RGB modality makes objects more prominent compared with the background, while the thermal modality records the temperature difference of the scene and therefore usually contains clear and continuous edge information. In this work, a novel mirror-complementary Transformer network (MCNet) is proposed for RGB-T SOD, which supervises the two modalities separately with a complementary set of saliency labels under a symmetrical structure. Moreover, attention-based feature interaction and serial multiscale dilated convolution (SDC)-based feature fusion modules are introduced to make the two modalities complement and adjust each other flexibly. When one modality fails, the model can still accurately segment the salient objects. To demonstrate robustness in challenging real-world scenes, the authors build an RGB-T SOD dataset, VT723, based on a large public semantic segmentation dataset used in the autonomous driving domain. Extensive experiments on benchmark datasets show that the method outperforms state-of-the-art approaches, including CNN-based and Transformer-based methods. The code can be found at https://github.com/jxr326/SwinMCNet .
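As a rough illustration of the mirror-complementary idea described above, the toy PyTorch sketch below runs two symmetric branches, one per modality, and lets each branch's saliency prediction gate the other branch's features before a simple fusion. The module names, layer sizes, and the gating-style interaction are assumptions made for illustration; this is not the authors' architecture, which is available at https://github.com/jxr326/SwinMCNet .

```python
# Illustrative sketch only: a toy two-branch "mirror" structure with a simple
# attention-style interaction, loosely following the MCNet idea in the abstract.
import torch
import torch.nn as nn

class ToyBranch(nn.Module):
    """One modality branch: a small conv encoder plus a 1-channel saliency head."""
    def __init__(self, ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(ch, 1, 1)

    def forward(self, x):
        feat = self.encoder(x)
        return feat, self.head(feat)

class ToyMirrorNet(nn.Module):
    """Two symmetric branches whose features gate each other before fusion."""
    def __init__(self, ch=32):
        super().__init__()
        self.rgb_branch = ToyBranch(ch)
        self.thermal_branch = ToyBranch(ch)
        self.fuse = nn.Conv2d(2 * ch, 1, 1)

    def forward(self, rgb, thermal):
        f_rgb, sal_rgb = self.rgb_branch(rgb)
        f_th, sal_th = self.thermal_branch(thermal)
        # Each modality is re-weighted by the other's saliency prediction
        # (a stand-in for the paper's feature interaction modules).
        f_rgb = f_rgb * torch.sigmoid(sal_th)
        f_th = f_th * torch.sigmoid(sal_rgb)
        fused = self.fuse(torch.cat([f_rgb, f_th], dim=1))
        # sal_rgb and sal_th would be supervised separately (in the paper, with
        # a complementary set of saliency labels); fused is the final map.
        return sal_rgb, sal_th, fused

if __name__ == "__main__":
    net = ToyMirrorNet()
    rgb = torch.randn(1, 3, 64, 64)
    thermal = torch.randn(1, 3, 64, 64)  # thermal replicated to 3 channels
    sal_rgb, sal_th, fused = net(rgb, thermal)
    print(sal_rgb.shape, sal_th.shape, fused.shape)  # all (1, 1, 64, 64)
```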

Language: English

Cited by

19

Residual spatial fusion network for RGB-thermal semantic segmentation
Ping Li, Junjie Chen, Binbin Lin

et al.

Neurocomputing, Year: 2024, Vol. 595, pp. 127913 - 127913

Published: May 22, 2024

Language: English

Cited by

9

Thermal Infrared Target Tracking: A Comprehensive Review
Di Yuan, Haiping Zhang, Xiu Shu

et al.

IEEE Transactions on Instrumentation and Measurement, Year: 2023, Vol. 73, pp. 1 - 19

Published: Dec. 1, 2023

The thermal infrared (TIR) target tracking task is not affected by illumination changes, and targets can be tracked at night, on rainy days, in fog, and in other extreme weather; it is therefore widely used in assisted driving, unmanned aerial vehicle reconnaissance, video surveillance, and other scenes. However, TIR tracking also presents some challenges, such as intensity change, occlusion, deformation, similarity interference, and so on. These challenges significantly affect the performance of TIR tracking methods. To resolve these scenarios, numerous TIR tracking methods have appeared in recent years. The purpose of this article is to give a comprehensive review and summary of the research status of TIR tracking. We first classify the methods according to their frameworks and briefly summarize the advantages and disadvantages of the different methods, which helps better understand the current research progress. Next, the public datasets/benchmarks for testing TIR trackers are introduced. Subsequently, we demonstrate the results of several representative methods to more intuitively show the progress made in this research. Finally, the future direction is discussed in an attempt to promote the development of TIR target-tracking tasks.

Language: English

Cited by

17

M2FNet: Mask-Guided Multi-Level Fusion for RGB-T Pedestrian Detection
Xiangyang Li, Shiguo Chen, Chunna Tian

et al.

IEEE Transactions on Multimedia, Year: 2024, Vol. 26, pp. 8678 - 8690

Published: Jan. 1, 2024

RGB-Thermal pedestrian detection has shown many notable advantages in various lighting and weather conditions by combining the information from RGB-T images. Due to distinct imaging principles, the two modalities consist of modality-specific and modality-consistent information. However, most existing methods indiscriminately integrate these two types of information, which leads to pollution across modalities. To address this issue, we propose a novel mask-guided multi-level fusion network (M2FNet) for RGB-T pedestrian detection. M2FNet independently explores consistent and specific features at three different levels, utilizing pixel-level positional masks to focus exclusively on pedestrian-related features. Specifically, at the feature extraction level, we selectively embed cross-modality differential compensation (CDC) modules and design a bidirectional multiscale fusion (BMF) module to fully utilize complementary information and enhance the precision of the predicted masks. At the global level, a mask-guided global consistency mining (MGCM) module is introduced to capture intra-modal and inter-modal consistency of pedestrians and generate highly discriminative features. Finally, to further reduce modality differences, a decision fusion (MPDF) strategy dynamically weights the predictions. Extensive experiments and comparisons demonstrate that our proposed M2FNet, with different backbones, outperforms state-of-the-art detectors on both the publicly available KAIST and CVC-14 datasets.
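A minimal sketch of the mask-guided fusion idea from this abstract: each modality predicts a pixel-level pedestrian mask that gates its own features before cross-modality fusion. The MaskGuidedFusion module name, the layer choices, and the single-stage gating are hypothetical simplifications; they only loosely echo the CDC/BMF/MGCM/MPDF components described above and are not the paper's implementation.

```python
# Hypothetical sketch: pixel-level masks gate each modality's features so that
# fusion focuses on pedestrian-related regions rather than mixing everything.
import torch
import torch.nn as nn

class MaskGuidedFusion(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # One mask head per modality: a 1-channel pedestrian-likelihood map.
        self.mask_rgb = nn.Conv2d(channels, 1, 1)
        self.mask_thermal = nn.Conv2d(channels, 1, 1)
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, feat_rgb, feat_thermal):
        m_rgb = torch.sigmoid(self.mask_rgb(feat_rgb))
        m_th = torch.sigmoid(self.mask_thermal(feat_thermal))
        # Gate each modality by its own mask, then fuse the gated features.
        gated = torch.cat([feat_rgb * m_rgb, feat_thermal * m_th], dim=1)
        return self.fuse(gated), m_rgb, m_th

if __name__ == "__main__":
    f_rgb = torch.randn(2, 64, 80, 64)
    f_th = torch.randn(2, 64, 80, 64)
    fused, m_rgb, m_th = MaskGuidedFusion(64)(f_rgb, f_th)
    print(fused.shape, m_rgb.shape)  # (2, 64, 80, 64) (2, 1, 80, 64)
```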

Language: English

Cited by

6

DASR: Dual-Attention Transformer for infrared image super-resolution

ShuBo Liang, Kechen Song, Wenli Zhao

et al.

Infrared Physics & Technology, Year: 2023, Vol. 133, pp. 104837 - 104837

Published: July 30, 2023

Language: English

Cited by

10

Multi-Scale Aggregation Transformers for Multispectral Object Detection
Shuai You, Xuedong Xie, Yujian Feng

et al.

IEEE Signal Processing Letters, Year: 2023, Vol. 30, pp. 1172 - 1176

Published: Jan. 1, 2023

Multispectral object detection for autonomous driving is a multi-object localization and classification task on visible and thermal modalities. In this scenario, modality differences lead to the lack of information in a single modality and to misalignment of cross-modality information. To alleviate these problems, most existing methods extract features at a single scale (e.g., they mainly focus on detecting significant targets such as cars or pedestrians), which leads to insufficient performance in capturing multi-scale discriminative features (e.g., small bicycles or blurred pedestrians) and to safety hazards in the driving process. In this paper, we propose a Multi-Scale Aggregation Network (MSANet) consisting of two parts, a Multi-Scale Aggregation Transformer (MSAT) and a Cross-modal Merging Fusion Mechanism (CMFM), which combines the advantages of CNNs and Transformers to extract rich image features from both modalities by mining both local and global context dependencies. Firstly, to reduce the lack of information in a single modality, we design a novel MSAT module to capture details and texture at multiple scales. Secondly, to handle the feature misalignment caused by modality differences, the CMFM is utilized to aggregate complementary features at multiple levels. Comprehensive experiments on benchmarks demonstrate that our approach shows better results than several state-of-the-art methods. The code is available at https://github.com/ysh-strive/MSANet .
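The sketch below illustrates, under simplifying assumptions, the two ingredients named in this abstract: multi-scale feature extraction (here, parallel dilated convolutions standing in for the MSAT) and a cross-modal merge with learned per-pixel weights standing in for the CMFM. Both module definitions are assumptions for illustration, not the authors' implementation (see https://github.com/ysh-strive/MSANet ).

```python
# Toy sketch of multi-scale aggregation plus cross-modal merging.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel convolutions with different dilation rates, concatenated and projected."""
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d) for d in dilations
        )
        self.project = nn.Conv2d(out_ch * len(dilations), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

class CrossModalMerge(nn.Module):
    """Merge visible and thermal features with learned per-pixel modality weights."""
    def __init__(self, ch):
        super().__init__()
        self.weight = nn.Conv2d(2 * ch, 2, 1)  # two weight maps, softmax-normalized

    def forward(self, f_vis, f_th):
        w = torch.softmax(self.weight(torch.cat([f_vis, f_th], dim=1)), dim=1)
        return w[:, :1] * f_vis + w[:, 1:] * f_th

if __name__ == "__main__":
    vis, th = torch.randn(1, 16, 64, 64), torch.randn(1, 16, 64, 64)
    ms = MultiScaleBlock(16, 32)
    merged = CrossModalMerge(32)(ms(vis), ms(th))
    print(merged.shape)  # (1, 32, 64, 64)
```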

Language: English

Cited by

10

Middle fusion and multi-stage, multi-form prompts for robust RGB-T tracking
Qiming Wang, Yongqiang Bai, Hongxing Song

et al.

Neurocomputing, Year: 2024, Vol. 596, pp. 127959 - 127959

Published: June 4, 2024

Language: English

Cited by

4

Exploring the potential of Siamese network for RGBT object tracking

Feng Liang-liang, Kechen Song, Junyi Wang

et al.

Journal of Visual Communication and Image Representation, Year: 2023, Vol. 95, pp. 103882 - 103882

Published: June 22, 2023

Language: English

Cited by

9

A rapid detection and quantification method for levee leakage outlets using drone infrared thermography and semantic segmentation
Renlian Zhou, Monjee K. Almustafa, Zhiping Wen

et al.

Engineering Applications of Artificial Intelligence, Year: 2025, Vol. 143, pp. 110066 - 110066

Published: Jan. 16, 2025

Language: English

Cited by

0

RGB-Thermal cameras calibration based on Maximum Index Map

Jiahui Wei, Zhen Zou, Wenjie Lai

et al.

Computers & Electrical Engineering, Year: 2025, Vol. 123, pp. 110234 - 110234

Published: March 14, 2025

Language: English

Cited by

0