Published: Oct. 26, 2024
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Journal Year: 2024, P. 25912 - 25921
Published: June 16, 2024
Citations: 19
Information Fusion, Journal Year: 2024, Volume and Issue: 108, P. 102352 - 102352
Published: March 11, 2024
Language: English
Citations: 18
IEEE Transactions on Pattern Analysis and Machine Intelligence, Journal Year: 2025, Volume and Issue: 47(3), P. 2071 - 2088
Published: Feb. 5, 2025
Multi-source image fusion combines the information from multiple images into a single output, thereby improving imaging quality. The topic has aroused great interest in the community, yet how to integrate the different sources remains a major challenge, even though existing self-attention-based transformer methods can capture spatial and channel similarities. In this paper, we first discuss the mathematical concepts behind the proposed generalized self-attention mechanism, in which standard self-attentions are considered basic forms. The mechanism employs multilinear algebra to drive the development of a novel fully-connected self-attention (FCSA) method that fully exploits local and non-local domain-specific correlations among multi-source images. Moreover, we propose a representation and embed it into the FCSA framework as a prior within an optimization problem. The optimization problems are unfolded into a fully-connected transformer fusion network (FC-Former). More specifically, the generalized concept promotes the potential of self-attention. Hence, FC-Former can be viewed as a single model unifying different fusion tasks. Compared with state-of-the-art methods, FC-Former exhibits robust and superior performance, showing its capability to faithfully preserve information.
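The two "basic forms" this generalization builds on can be made concrete in a few lines. The PyTorch sketch below is an illustrative assumption, not the authors' FC-Former code: it contrasts spatial self-attention, which relates the N positions of a feature map, with channel self-attention, which relates its C channels; projections are left as identities for brevity.

```python
# Minimal sketch of the two self-attention "basic forms" the paper generalizes.
import torch

def spatial_attention(x):
    """x: (B, C, N) feature map flattened over space; attends across the N positions."""
    attn = torch.softmax(x.transpose(1, 2) @ x / x.shape[1] ** 0.5, dim=-1)  # (B, N, N)
    return x @ attn.transpose(1, 2)                                          # (B, C, N)

def channel_attention(x):
    """Same input, but attends across the C channels instead of the positions."""
    attn = torch.softmax(x @ x.transpose(1, 2) / x.shape[2] ** 0.5, dim=-1)  # (B, C, C)
    return attn @ x                                                          # (B, C, N)

x = torch.randn(2, 32, 64)  # batch of 2, 32 channels, 64 spatial positions
print(spatial_attention(x).shape, channel_attention(x).shape)
```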
Language: English
Citations: 3
Information Fusion, Journal Year: 2025, Volume and Issue: 118, P. 102931 - 102931
Published: Jan. 8, 2025
Language: English
Citations: 2
Visual Intelligence, Journal Year: 2024, Volume and Issue: 2(1)
Published: Dec. 31, 2024
Abstract Multimodal image fusion aims to integrate information from different imaging techniques to produce a comprehensive, detail-rich single image for downstream vision tasks. Existing methods based on local convolutional neural networks (CNNs) struggle to capture global features efficiently, while Transformer-based models are computationally expensive, although they excel at global modeling. Mamba addresses these limitations by leveraging a selective structured state space model (S4) to effectively handle long-range dependencies while maintaining linear complexity. In this paper, we propose FusionMamba, a novel dynamic feature enhancement framework that aims to overcome the challenges faced by CNNs and Vision Transformers (ViTs) in computer vision tasks. The framework improves the visual state-space model by integrating convolution and channel attention mechanisms, which not only retains its powerful global modeling capability, but also greatly reduces redundancy and enhances the expressiveness of local features. In addition, we have developed a new module called the dynamic feature fusion module (DFFM). It combines a dynamic feature enhancement module (DFEM) for texture and disparity perception with a cross-modal fusion module (CMFM), which focuses on enhancing inter-modal correlation while suppressing redundant information. Experiments show that FusionMamba achieves state-of-the-art performance in a variety of multimodal image fusion tasks as well as downstream experiments, demonstrating its broad applicability and superiority.
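For intuition about the linear-complexity claim, the NumPy sketch below is a toy version of the selective state-space scan that Mamba-style models build on, with input-dependent B_t and C_t. The shapes, the diagonal state matrix, and the scalar input summary are illustrative assumptions, not FusionMamba's implementation.

```python
# Toy selective scan: h_t = A * h_{t-1} + B_t * u_t,  y_t = C_t . h_t,
# where B_t and C_t are projected from the current input (the "selective" part).
# One pass over the sequence, so the cost is linear in T.
import numpy as np

def selective_scan(x, A, W_B, W_C):
    """x: (T, d) sequence; A: (n,) diagonal state decay; W_B, W_C: (d, n) projections."""
    T = x.shape[0]
    h = np.zeros(A.shape[0])
    y = np.zeros(T)
    for t in range(T):
        B_t = x[t] @ W_B                 # input-dependent input matrix (n,)
        C_t = x[t] @ W_C                 # input-dependent output matrix (n,)
        h = A * h + B_t * x[t].mean()    # scalar summary of x[t] keeps the toy simple
        y[t] = C_t @ h
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal((128, 16))
y = selective_scan(x, A=np.full(8, 0.9),
                   W_B=rng.standard_normal((16, 8)),
                   W_C=rng.standard_normal((16, 8)))
print(y.shape)  # (128,)
```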
Language: English
Citations: 14
Agronomy, Journal Year: 2024, Volume and Issue: 14(2), P. 341 - 341
Published: Feb. 7, 2024
Nanotechnology, and nanosensors in particular, has increasingly drawn researchers' attention in recent years, since it has been shown to be a powerful tool in several fields such as mining, robotics, medicine and agriculture, amongst others. Challenges ahead, such as food availability, climate change and sustainability, have promoted and pushed forward the use of nanosensors in agroindustry and environmental applications. However, issues with noise and confounding signals make the use of these tools a non-trivial technical challenge. Great advances in artificial intelligence, and more particularly in machine learning, have provided new tools that have allowed researchers to improve the quality and functionality of nanosensor systems. This short review presents the latest work in the analysis of data from nanosensors using machine learning for agroenvironmental applications. It consists of an introduction to the topics of machine learning and its application in the field of nanosensors. The rest of the paper presents examples of machine learning techniques in the utilisation of electrochemical, luminescent, SERS and colourimetric nanosensor classes. The final section offers a discussion and conclusion concerning the relevance of the material discussed and its future in the agroenvironmental sector.
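As a minimal illustration of the kind of pipeline such reviews survey, the scikit-learn sketch below trains a random-forest classifier on synthetic stand-in nanosensor responses. The feature and label construction is invented for the example; real electrochemical, luminescent, SERS or colourimetric readings would take its place.

```python
# Sketch: classify nanosensor readings (analyte present / absent) with a
# random forest. All data here is synthetic stand-in material.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.standard_normal((200, 50))           # 200 readings, 50 spectral features
y = (X[:, :5].sum(axis=1) > 0).astype(int)   # stand-in binary label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```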
Language: English
Citations: 13
IEEE Transactions on Instrumentation and Measurement, Journal Year: 2024, Volume and Issue: 73, P. 1 - 15
Published: Jan. 1, 2024
Infrared images can provide prominent targets based on radiation differences, making them suitable for use in all-day and all-night conditions. On the other hand, visible images offer texture details with high spatial resolution. Infrared and visible image fusion is thus promising for achieving the best of both. Conventional frequency-domain or multi-scale transformation methods are good at preserving details, while deep learning-based methods have become more popular because they preserve high-level semantic features. To tackle the challenge of extracting and fusing cross-modality and cross-domain information, we propose a Spatial-Frequency Collaborative Fusion (SFCFusion) framework that effectively fuses information in the frequency domain and the feature space. In the frequency domain, the source images are decomposed into base and detail layers with existing decomposition methods, and a kernel-based saliency generation module is designed to provide region-level structural information. A deep encoder is employed to extract features from the source images, the decomposed layers and the saliency maps. In the shared feature space, spatial-frequency collaborative fusion is achieved through our proposed adaptive fusion scheme. We have conducted experiments to compare SFCFusion with both conventional and deep learning approaches on the TNO, LLVIP and M3FD datasets. The qualitative and quantitative evaluation results demonstrate the effectiveness of SFCFusion, and its superiority is further demonstrated on the downstream detection task. Our code will be available at https://github.com/ChenHanrui430/SFCFusion.
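The frequency-domain half of such a pipeline can be sketched with a generic two-scale decomposition. The snippet below is a schematic stand-in, not SFCFusion's network or its kernel-based saliency module: it splits each source into base and detail layers with a box filter and weights the detail layers by a simple gradient-magnitude saliency.

```python
# Two-scale base/detail fusion sketch: average the bases, pick details by
# per-pixel gradient saliency. Filter sizes and rules are illustrative.
import numpy as np
from scipy.ndimage import uniform_filter, sobel

def fuse_two_scale(ir, vis, size=31):
    base_ir, base_vis = uniform_filter(ir, size), uniform_filter(vis, size)
    det_ir, det_vis = ir - base_ir, vis - base_vis
    sal_ir = np.hypot(sobel(ir, 0), sobel(ir, 1))    # gradient-magnitude saliency
    sal_vis = np.hypot(sobel(vis, 0), sobel(vis, 1))
    w = sal_ir / (sal_ir + sal_vis + 1e-8)           # per-pixel detail weight
    return 0.5 * (base_ir + base_vis) + w * det_ir + (1 - w) * det_vis

ir = np.random.rand(128, 128).astype(np.float32)
vis = np.random.rand(128, 128).astype(np.float32)
print(fuse_two_scale(ir, vis).shape)
```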
Language: English
Citations: 11
Remote Sensing, Journal Year: 2024, Volume and Issue: 16(20), P. 3804 - 3804
Published: Oct. 13, 2024
The fusion of infrared and visible images can fully leverage the respective advantages of each, providing a more comprehensive and richer set of information. This is applicable in various fields such as military surveillance, night navigation, environmental monitoring, etc. In this paper, a novel infrared and visible image fusion method based on sparse representation and guided filtering in the Laplacian pyramid (LP) domain is introduced. The source images are decomposed into low- and high-frequency bands by the LP, respectively. Sparse representation has achieved significant effectiveness in image fusion and is used to process the low-frequency band; guided filtering has excellent edge-preserving effects and can effectively maintain the spatial continuity of the high-frequency band. Therefore, guided filtering is combined with the weighted sum of eight-neighborhood-based modified Laplacian (WSEML) to process the high-frequency bands. Finally, the inverse LP transform is used to reconstruct the fused image. We conducted simulation experiments on the publicly available TNO dataset to validate the superiority of our proposed algorithm in fusing infrared and visible images. Our method preserves both the thermal radiation characteristics of the infrared image and the detailed features of the visible image.
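A bare-bones version of the pyramid machinery reads as follows. Note that simple stand-in rules (averaging for the low-frequency band, max-absolute selection for the high-frequency bands) replace the paper's sparse-representation and guided-filtering/WSEML rules, so this is only a structural sketch.

```python
# Laplacian-pyramid fusion skeleton: build Gaussian pyramids, fuse the coarsest
# (low-frequency) level by averaging, fuse each Laplacian (high-frequency)
# level by max-absolute selection, then reconstruct by upsampling and adding.
import cv2
import numpy as np

def lp_fuse(a, b, levels=4):
    ga, gb = [a.astype(np.float32)], [b.astype(np.float32)]
    for _ in range(levels):
        ga.append(cv2.pyrDown(ga[-1]))
        gb.append(cv2.pyrDown(gb[-1]))
    fused = 0.5 * (ga[-1] + gb[-1])               # low-frequency band: average
    for i in range(levels - 1, -1, -1):
        size = (ga[i].shape[1], ga[i].shape[0])
        la = ga[i] - cv2.pyrUp(ga[i + 1], dstsize=size)   # Laplacian band of a
        lb = gb[i] - cv2.pyrUp(gb[i + 1], dstsize=size)   # Laplacian band of b
        detail = np.where(np.abs(la) >= np.abs(lb), la, lb)
        fused = cv2.pyrUp(fused, dstsize=size) + detail   # inverse LP step
    return fused

ir = np.random.rand(256, 256).astype(np.float32)
vis = np.random.rand(256, 256).astype(np.float32)
print(lp_fuse(ir, vis).shape)
```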
Language: English
Citations: 10
IEEE Transactions on Intelligent Transportation Systems, Journal Year: 2024, Volume and Issue: 25(11), P. 17794 - 17809
Published: July 19, 2024
Language: English
Citations: 9
Advanced Functional Materials, Journal Year: 2025, Volume and Issue: unknown
Published: Feb. 10, 2025
Abstract Recently, lead-free double perovskites have gained much attention for their superior optoelectronic properties. However, realizing efficient white-light and near-infrared (NIR) emission, as well as bright radioluminescence (RL), in a single compound has not yet been reported. Herein, a Bi3+/Mo4+-codoped Cs2Ag0.6Na0.4InCl6 perovskite is synthesized by a hydrothermal reaction. Under photoexcitation, the codoped perovskite exhibits highly efficient broadband warm-white emission (610 nm) and NIR emission (930 nm), which can be attributed to the self-trapped exciton and the d–d transition of Mo4+, respectively. Interestingly, it also emits strong RL with a light yield of 25100 photons per MeV under X-ray irradiation. Based on a flexible perovskite/PDMS film, its applications in white light-emitting diodes (WLED), a NIR light source and an X-ray scintillator are demonstrated, and all exhibit remarkable performance. Finally, as-fabricated devices for white-light, NIR and X-ray imaging are further demonstrated, and pixel-level image fusion is realized without pixel mismatch or complex processing. Therefore, the internal information of a capsule wrapped in a centrifuge tube with an iron wire is successfully revealed through image fusion.
Language: English
Citations: 1