A novel generative adversarial network framework for super-resolution reconstruction of remote sensing
Ruilin Li, Luliang Wen, Songtao Shao et al.

Frontiers in Earth Science, Journal Year: 2025, Volume and Issue: 13

Published: May 8, 2025

Introduction: Remote sensing super-resolution (RS-SR) plays a crucial role in the analysis of remote sensing images, aiming to improve the spatial resolution of images captured at lower resolutions. Recent advances in RS-SR research have been largely driven by the integration of deep learning techniques, especially the application of Generative Adversarial Networks (GANs), which have shown significant effectiveness in advancing this field. While GANs have achieved notable results in this field, their tendency toward mode collapse often introduces artifacts and distorts textures in reconstructed images. Methods: This study proposes a novel model, termed the Diffusion Enhanced Network (DEGAN), designed to improve reconstruction quality through the incorporation of a diffusion model. At the heart of DEGAN lies an innovative architecture that fuses adversarial mechanisms in both the generator and discriminator with an integrated diffusion module. This additional component uses the noise reduction capabilities of the diffusion process to refine intermediate stages of image generation, ultimately improving the clarity of the final output and enhancing super-resolution performance. Results: On the test dataset, the peak signal-to-noise ratio (PSNR) increased by 0.345 dB at 2× scaling and by 0.671 dB at 4× scaling, while the structural similarity index (SSIM) improved by 0.0087 and 0.0166, respectively, compared with the current state-of-the-art (SOTA) approach. Discussion: These results indicate that DEGAN significantly improves reconstruction quality. The introduction of the diffusion module and attention mechanism effectively reduces artifacts and enhances clarity, addressing the common issues of artifacts and texture distortion in reconstruction.
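The reported gains are expressed in PSNR and SSIM, the two standard full-reference quality metrics for super-resolution. As a minimal sketch of how such numbers are typically computed (not the authors' evaluation code), the following Python example compares a reconstruction against its high-resolution reference using scikit-image; the image data here is synthetic and purely illustrative.

```python
# Minimal sketch of PSNR/SSIM evaluation for super-resolution outputs.
# Assumes scikit-image is installed; the image data below is synthetic.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_sr(reference: np.ndarray, reconstruction: np.ndarray) -> tuple[float, float]:
    """Return (PSNR in dB, SSIM) of a reconstruction against its HR reference."""
    psnr = peak_signal_noise_ratio(reference, reconstruction, data_range=255)
    ssim = structural_similarity(reference, reconstruction,
                                 channel_axis=-1, data_range=255)
    return psnr, ssim

if __name__ == "__main__":
    # Synthetic example: a reference image and a slightly perturbed "reconstruction".
    rng = np.random.default_rng(0)
    hr = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)
    sr = np.clip(hr.astype(np.int16) + rng.integers(-5, 6, hr.shape), 0, 255).astype(np.uint8)
    psnr, ssim = evaluate_sr(hr, sr)
    print(f"PSNR: {psnr:.3f} dB, SSIM: {ssim:.4f}")
```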

Language: English

Citations

0

Advanced Insect Detection Network for UAV-Based Biodiversity Monitoring
Halimjon Khujamatov, Shakhnoza Muksimova, Mirjamol Abdullaev et al.

Remote Sensing, Journal Year: 2025, Volume and Issue: 17(6), P. 962 - 962

Published: March 9, 2025

The Advanced Insect Detection Network (AIDN) represents a significant advancement in the application of deep learning to ecological monitoring and is specifically designed to enhance the accuracy and efficiency of insect detection from unmanned aerial vehicle (UAV) imagery. Using a novel architecture that incorporates advanced activation and normalization techniques, multi-scale feature fusion, and a custom-tailored loss function, AIDN addresses the unique challenges posed by the small size, high mobility, and diverse backgrounds of insects in aerial images. In comprehensive testing against established models, AIDN demonstrated superior performance, achieving 92% precision, 88% recall, an F1-score of 90%, and a mean Average Precision (mAP) score of 89%. These results signify a substantial improvement over traditional models such as YOLO v4, SSD, and Faster R-CNN, which typically show performance metrics approximately 10–15% lower on similar tests. The practical implications of AIDN are profound, offering benefits for agricultural management and biodiversity conservation. By automating detection and classification processes, AIDN reduces labor-intensive manual tasks, enabling more frequent and accurate data collection. This improvement in data quality and collection frequency enhances decision making in pest management and conservation, leading to more effective interventions and strategies. AIDN's design and capabilities set a new standard in the field, promising scalable solutions for UAV-based monitoring. Its ongoing development is expected to integrate additional sensory data and real-time adaptive capabilities, further broadening its applicability and ensuring its role as a transformative tool in ecological monitoring and environmental science.
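The reported F1-score follows from the stated precision and recall via the harmonic mean (2 × 0.92 × 0.88 / (0.92 + 0.88) ≈ 0.90). As a minimal sketch (not the authors' evaluation pipeline), the Python snippet below derives these detection metrics from hypothetical true-positive, false-positive, and false-negative counts chosen only to roughly match the reported figures.

```python
# Minimal sketch of precision/recall/F1 for object detection,
# computed from matched-detection counts (not the authors' evaluation code).

def detection_metrics(tp: int, fp: int, fn: int) -> dict[str, float]:
    """Precision, recall and F1 from true positives, false positives, false negatives."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

if __name__ == "__main__":
    # Hypothetical counts chosen to approximate the reported 92% precision / 88% recall.
    print(detection_metrics(tp=880, fp=77, fn=120))
    # Cross-check of the reported F1: 2 * 0.92 * 0.88 / (0.92 + 0.88) ≈ 0.8995 ≈ 90%.
    print(2 * 0.92 * 0.88 / (0.92 + 0.88))
```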

Language: English

Citations

0

Lightweight Evolving U-Net for Next-Generation Biomedical Imaging
Furkat Safarov, Ugiloy Khojamuratova, Misirov Komoliddin et al.

Diagnostics, Journal Year: 2025, Volume and Issue: 15(9), P. 1120 - 1120

Published: April 28, 2025

Background/Objectives: Accurate and efficient segmentation of cell nuclei in biomedical images is critical for a wide range of clinical and research applications, including cancer diagnostics, histopathological analysis, and therapeutic monitoring. Although U-Net and its variants have achieved notable success in medical image segmentation, challenges persist in balancing accuracy with computational efficiency, especially when dealing with large-scale datasets or resource-limited settings. This study aims to develop a lightweight, scalable U-Net-based architecture that enhances segmentation performance while substantially reducing computational overhead. Methods: We propose a novel evolving U-Net that integrates multi-scale feature extraction, depthwise separable convolutions, residual connections, and attention mechanisms to improve robustness across diverse imaging conditions. Additionally, we incorporate channel reduction and expansion strategies inspired by ShuffleNet to minimize model parameters without sacrificing precision. The model was extensively validated using the 2018 Data Science Bowl dataset. Results: Experimental evaluation demonstrates that the proposed model achieves a Dice Similarity Coefficient (DSC) of 0.95 and 0.94 on a second evaluation metric, surpassing state-of-the-art benchmarks. It effectively delineates complex, overlapping structures with high fidelity while maintaining an efficiency suitable for real-time applications. Conclusions: The proposed lightweight U-Net variant offers an adaptable solution for biomedical segmentation tasks. Its strong performance in both accuracy and efficiency highlights its potential for deployment in diagnostics and biological research, paving the way for resource-conscious segmentation solutions.
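Two of the components named above, depthwise separable convolutions and the Dice Similarity Coefficient, can be illustrated compactly. The PyTorch sketch below is our own minimal example under assumed layer sizes, not the published architecture.

```python
# Illustrative sketch (not the published model): a depthwise separable
# convolution block and a soft Dice coefficient, two components named in the abstract.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice Similarity Coefficient for binary masks with values in [0, 1]."""
    intersection = (pred * target).sum()
    return (2 * intersection + eps) / (pred.sum() + target.sum() + eps)

if __name__ == "__main__":
    block = DepthwiseSeparableConv(in_ch=16, out_ch=32)   # channel sizes are arbitrary
    x = torch.randn(1, 16, 64, 64)
    print(block(x).shape)                                  # torch.Size([1, 32, 64, 64])
    mask = torch.randint(0, 2, (1, 1, 64, 64)).float()
    print(dice_coefficient(mask, mask).item())             # 1.0 for a perfect match
```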

Language: English

Citations

0
