Earth Science Informatics, Journal Year: 2024, Issue: unknown
Published: Sep. 18, 2024
Language: English
Remote Sensing, Journal Year: 2024, Issue: 16(16), P. 3021 - 3021
Published: Aug. 17, 2024
Underwater images, as a crucial medium for storing ocean information in underwater sensors, play a vital role in various tasks. However, they are prone to distortion due to the imaging environment, which leads to a decline in visual quality, an urgent issue for marine vision systems to address. It is therefore necessary to develop underwater image enhancement (UIE) methods and corresponding quality assessment methods. At present, most underwater image quality assessment (UIQA) methods primarily rely on extracting handcrafted features that characterize degradation attributes; they struggle to measure complex mixed distortions and often exhibit discrepancies with human perception in practical applications. Furthermore, current UIQA methods lack consideration from the perspective of enhancement effects. To this end, this paper employs luminance and saliency priors as critical cues, for the first time, to assess the enhancement effect achieved by UIE algorithms at both global and local levels, in a model named JLSAU. The proposed JLSAU is built upon an overall pyramid-structured backbone, supplemented by a Luminance Feature Extraction Module (LFEM) and a Saliency Weight Learning Module (SWLM), which aim at obtaining quality-aware features at multiple scales. The LFEM perceives visually sensitive luminance, including histogram statistics and grayscale positional information, while the SWLM reflects saliency variation in both the spatial and channel domains. Finally, to effectively model the relationship among the different perception levels contained in the multi-scale features, an Attention Feature Fusion Module (AFFM) is proposed. Experimental results on the public UIQE and UWIQA datasets demonstrate that JLSAU outperforms existing state-of-the-art UIQA methods.
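To make the fusion step in this abstract concrete, below is a minimal PyTorch sketch of attention-based fusion of multi-scale feature maps. It is illustrative only: the class name, channel sizes, and the softmax-over-scales attention form are assumptions for this sketch, not the paper's actual AFFM specification.

import torch
import torch.nn as nn

class AttentionFeatureFusion(nn.Module):
    # Hypothetical stand-in for an AFFM-style module: each scale is
    # projected, scored, and the scales are blended with a softmax so
    # every spatial location draws on the most informative level.
    def __init__(self, channels: int = 64, num_scales: int = 3):
        super().__init__()
        self.proj = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=1)
            for _ in range(num_scales)
        )
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        # feats: per-scale (B, C, H, W) maps, already resized to a common H, W.
        projected = [p(f) for p, f in zip(self.proj, feats)]
        scores = torch.stack([self.score(f) for f in projected], dim=0)  # (S, B, 1, H, W)
        weights = torch.softmax(scores, dim=0)  # attention over scales
        return (weights * torch.stack(projected, dim=0)).sum(dim=0)  # (B, C, H, W)

# Usage: fuse three 64-channel maps at a shared 32x32 resolution.
fusion = AttentionFeatureFusion(channels=64, num_scales=3)
maps = [torch.randn(2, 64, 32, 32) for _ in range(3)]
fused = fusion(maps)  # (2, 64, 32, 32)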
Language: English
Cited by: 3
Neural Networks, Journal Year: 2024, Issue: 181, P. 106809 - 106809
Published: Oct. 18, 2024
Language: English
Cited by: 3
Pattern Recognition, Journal Year: 2024, Issue: unknown, P. 111198 - 111198
Published: Nov. 1, 2024
Language: English
Cited by: 3
International Journal of Computers and Applications, Journal Year: 2025, Issue: unknown, P. 1 - 17
Published: Feb. 20, 2025
Language: English
Cited by: 0
The Visual Computer, Journal Year: 2025, Issue: unknown
Published: Mar. 19, 2025
Language: English
Cited by: 0
Measurement, Journal Year: 2025, Issue: unknown, P. 117329 - 117329
Published: Mar. 1, 2025
Language: English
Cited by: 0
Sensors, Journal Year: 2025, Issue: 25(9), P. 2820 - 2820
Published: Apr. 30, 2025
In this paper, we investigate implicit surface reconstruction methods based on deep learning, enhanced by multi-sensor data fusion, to improve the accuracy of 3D reconstruction in complex scenes. Existing single-sensor approaches often struggle with occlusions and incomplete observations. By fusing complementary information from multiple sensors (e.g., multiple cameras or a combination of cameras and depth sensors), our proposed framework alleviates the issue of missing or partial data and further increases reconstruction fidelity. We introduce a novel neural network that learns a continuous signed distance function (SDF) for scene geometry, conditioned on fused feature representations. The network seamlessly merges multi-modal information into a unified representation, enabling precise and watertight surface reconstruction. We conduct extensive experiments on benchmark datasets, demonstrating superior accuracy compared to single-sensor baselines and classical fusion methods. Quantitative and qualitative results reveal that our method significantly improves reconstruction completeness and geometric detail, while the implicit approach provides smooth, high-resolution surfaces. Additionally, we analyze the influence of the number and diversity of sensors on reconstruction quality, the model's ability to generalize to unseen data, and computational considerations. Our work highlights the potential of coupling multi-sensor fusion with implicit neural representations to achieve robust 3D reconstruction in challenging real-world conditions.
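As a rough illustration of the conditioning this abstract describes, the following PyTorch sketch maps a 3D query point plus a fused multi-sensor feature vector to a signed distance. All names, dimensions, and the simple averaging "fusion" are hypothetical placeholders, not the paper's actual architecture.

import torch
import torch.nn as nn

class ConditionedSDF(nn.Module):
    # Hypothetical sketch: an MLP that takes a query point x and a fused
    # multi-sensor feature z, and returns the signed distance to the surface.
    def __init__(self, feat_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar signed distance per query point
        )

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # x: (N, 3) query points; z: (N, feat_dim) fused sensor features.
        return self.net(torch.cat([x, z], dim=-1))

# Toy fusion step: average per-sensor features, then query the SDF.
cam_feat, depth_feat = torch.randn(1024, 128), torch.randn(1024, 128)
z = 0.5 * (cam_feat + depth_feat)         # stand-in for a learned fusion
points = torch.rand(1024, 3) * 2.0 - 1.0  # queries in [-1, 1]^3
sdf = ConditionedSDF()(points, z)         # (1024, 1) signed distances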
Language: English
Cited by: 0
Multimedia Systems, Journal Year: 2025, Issue: 31(3)
Published: May 5, 2025
Cited by: 0
International Journal of Applied Earth Observation and Geoinformation, Journal Year: 2025, Issue: 140, P. 104553 - 104553
Published: May 9, 2025
Language: English
Cited by: 0
Signal Image and Video Processing, Journal Year: 2025, Issue: 19(8)
Published: May 28, 2025
Language: English
Cited by: 0