Semantic Uncertainty‐Awared for Semantic Segmentation of Remote Sensing Images
Xiangfeng Qiu, Zhilin Zhang, Xin Luo, et al.

IET Image Processing, Journal Year: 2025, Volume and Issue: 19(1)

Published: Jan. 1, 2025

ABSTRACT Remote sensing image segmentation is crucial for applications ranging from urban planning to environmental monitoring. However, traditional approaches struggle with the unique challenges of aerial imagery, including complex boundary delineation and intricate spatial relationships. To address these limitations, we introduce the semantic uncertainty‐aware segmentation (SUAS) method, an innovative plug‐and‐play solution designed specifically for remote sensing analysis. SUAS builds upon the rotated multi‐scale interaction network (RMSIN) architecture and introduces a prompt refinement and uncertainty adjustment module (PRUAM). This novel component transforms the original textual prompts into refined, uncertainty‐aware descriptions, particularly focusing on the ambiguous boundaries prevalent in aerial imagery. By incorporating semantic uncertainty, SUAS directly tackles the inherent complexities of boundary delineation, enabling more refined segmentations. Experimental results demonstrate SUAS's effectiveness, showing improvements over existing methods across multiple metrics. It achieves consistent enhancements in mean intersection‐over‐union (mIoU) and in precision at various thresholds, with notable performance in handling objects with irregular boundaries, a persistent challenge in remote sensing imagery. The results indicate that the design, which leverages uncertainty to guide the segmentation task, contributes to improved accuracy.

Language: English
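
The abstract above describes PRUAM only at a high level. The following is a minimal, hypothetical sketch of the general idea, per-token uncertainty estimates weighting how strongly textual prompt tokens are refined by visual context; the class name PromptRefiner, the layer choices, and the tensor shapes are illustrative assumptions, not the authors' RMSIN/PRUAM implementation.

```python
# Hypothetical sketch of a prompt refinement / uncertainty adjustment idea.
# Names and shapes are illustrative only, not the paper's actual module.
import torch
import torch.nn as nn

class PromptRefiner(nn.Module):
    """Toy stand-in: refine prompt tokens in proportion to their estimated ambiguity."""
    def __init__(self, embed_dim=256):
        super().__init__()
        self.uncertainty_head = nn.Linear(embed_dim, 1)  # per-token uncertainty score
        self.refine = nn.MultiheadAttention(embed_dim, num_heads=8, batch_first=True)

    def forward(self, prompt_emb, visual_emb):
        # Estimate how ambiguous each prompt token is (e.g. boundary-related words).
        u = torch.sigmoid(self.uncertainty_head(prompt_emb))           # (B, T, 1)
        # Let prompt tokens attend to visual context for refinement.
        refined, _ = self.refine(prompt_emb, visual_emb, visual_emb)   # (B, T, D)
        # Ambiguous tokens receive a larger share of the refined signal.
        return prompt_emb + u * refined

# Usage: refined prompts would replace the originals before the segmentation decoder.
refiner = PromptRefiner()
prompts = torch.randn(2, 16, 256)   # batch of 16-token prompt embeddings
visual = torch.randn(2, 64, 256)    # flattened multi-scale visual features
refined_prompts = refiner(prompts, visual)
```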

Asymmetric Training and Symmetric Fusion for Image Denoising in Edge Computing

Yupeng Zhang, Xiaofeng Liao

Symmetry, Journal Year: 2025, Volume and Issue: 17(3), P. 424 - 424

Published: March 12, 2025

Effectively handling mixed noise types and varying noise intensities is crucial for accurate information extraction and analysis, particularly in resource-limited edge computing scenarios. Conventional image denoising approaches struggle with unseen noise distributions, limiting their effectiveness in real-world applications such as object detection, classification, and change detection. To address these challenges, we introduce a novel framework that integrates asymmetric learning with symmetric fusion. It leverages a pretrained model trained only on clean images to provide semantic priors, while a supervised module learns direct noise-to-clean mappings using paired noisy–clean data. The asymmetry of our approach stems from its dual training objectives: the encoder extracts semantic priors from noise-free data, while the supervised module learns the noise-to-clean mappings. Symmetry is achieved through the structured fusion of their features, enhancing generalization across diverse conditions, including those found in edge computing environments. Extensive evaluations across multiple noise types and intensities, including remote sensing imagery, demonstrate the superior robustness of our approach. Our method achieves state-of-the-art performance in both in-distribution and out-of-distribution scenarios, significantly improving image quality for downstream tasks such as environmental monitoring and disaster response. Future work may explore extending this framework to specialized domains such as hyperspectral imaging and nighttime analysis, and further refining the interplay between semantic priors and deep-learning-based restoration.

Language: English

Citations: 0
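
As a rough illustration of the asymmetric-training and symmetric-fusion idea described above, the sketch below pairs a frozen encoder (standing in for the model pretrained only on clean images) with a trainable supervised branch, and concatenates their features on equal footing before decoding. The module names, layer sizes, and training snippet are assumptions for illustration, not the paper's architecture.

```python
# Illustrative sketch, not the paper's code: a frozen prior branch plus a
# supervised denoising branch, fused symmetrically before the decoder.
import torch
import torch.nn as nn

class FusionDenoiser(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        # Prior branch: pretrained on clean images only, kept frozen (the asymmetry).
        self.prior_encoder = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        for p in self.prior_encoder.parameters():
            p.requires_grad = False
        # Supervised branch: trained on paired noisy-clean data.
        self.denoise_encoder = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        # Symmetric fusion: both feature streams enter the decoder with equal weight.
        self.decoder = nn.Conv2d(2 * ch, 3, 3, padding=1)

    def forward(self, noisy):
        prior = self.prior_encoder(noisy)    # semantic prior features
        feat = self.denoise_encoder(noisy)   # learned noise-aware features
        fused = torch.cat([prior, feat], dim=1)
        return self.decoder(fused)           # predicted clean image

# Training only updates the supervised branch and decoder:
model = FusionDenoiser()
noisy, clean = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)
loss = nn.functional.mse_loss(model(noisy), clean)
loss.backward()  # gradients flow everywhere except the frozen prior encoder
```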
