Fusion of Aerial and Satellite Images for Automatic Extraction of Building Footprint Information Using Deep Neural Networks
Ehsan Haghighi Gashti,

Hanieh Bahiraei,

Mohammad Javad Valadan Zoej

et al.

Information, 2025, 16(5), pp. 380–380

Published: May 2, 2025

The analysis of aerial and satellite images for building footprint detection is one of the major challenges in photogrammetry and remote sensing. This information is useful in various applications, such as urban planning, disaster monitoring, and 3D city modeling. However, building footprint detection remains a significant challenge due to the diverse characteristics of buildings, such as shape, size, and shadow interference. This study investigated the simultaneous use of aerial and satellite images to improve the accuracy of deep learning models for building detection. For this purpose, aerial images with a spatial resolution of 30 cm and Sentinel-2 imagery were employed. Several satellite-derived spectral indices were extracted from the Sentinel-2 image. Then, U-Net models combined with ResNet-18 and ResNet-34 encoders were trained on these data. The results showed that the combination of the model with the ResNet-34 encoder and the dataset obtained by integrating the aerial images with the spectral indices, referred to as RGB–Sentinel–ResNet34, achieved the best performance among the evaluated models. It attained an accuracy of 96.99%, an F1-score of 90.57%, and an Intersection over Union of 73.86%. Compared to the other models, RGB–Sentinel–ResNet34 showed an improvement in generalization capability. The findings indicated that fusing aerial and satellite data can substantially enhance the performance of deep learning models for building footprint detection.
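
The abstract describes fusing aerial RGB imagery with Sentinel-2-derived spectral index channels and feeding the stack to a U-Net with a ResNet-34 encoder. The sketch below illustrates one plausible way to set this up; it is not the authors' code, and the band choices (B4, B8, B11), the NDVI/NDBI indices, the 5-channel input layout, and the use of the segmentation_models_pytorch library are all assumptions made for illustration.

```python
# Minimal sketch (assumed implementation, not the authors' code):
# fuse aerial RGB with Sentinel-2-derived index channels and run a
# U-Net with a ResNet-34 encoder for building-footprint segmentation.
import numpy as np
import torch
import segmentation_models_pytorch as smp

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index from Sentinel-2 B8 (NIR) and B4 (red)."""
    return (nir - red) / (nir + red + eps)

def ndbi(swir, nir, eps=1e-6):
    """Normalized Difference Built-up Index from Sentinel-2 B11 (SWIR) and B8 (NIR)."""
    return (swir - nir) / (swir + nir + eps)

# Hypothetical inputs: aerial RGB patch and Sentinel-2 bands resampled to the same grid.
rgb = np.random.rand(3, 256, 256).astype(np.float32)
b4  = np.random.rand(256, 256).astype(np.float32)   # red
b8  = np.random.rand(256, 256).astype(np.float32)   # near-infrared
b11 = np.random.rand(256, 256).astype(np.float32)   # shortwave infrared

# Stack RGB with the two index channels -> a 5-channel fused input tensor.
indices = np.stack([ndvi(b8, b4), ndbi(b11, b8)])
x = torch.from_numpy(np.concatenate([rgb, indices]))[None]  # shape (1, 5, H, W)

# U-Net with a ResNet-34 encoder; in_channels matches the fused stack,
# a single output channel gives the building-footprint mask logits.
model = smp.Unet(encoder_name="resnet34", encoder_weights="imagenet",
                 in_channels=5, classes=1)
with torch.no_grad():
    mask_logits = model(x)   # (1, 1, H, W)
print(mask_logits.shape)
```

In practice the model would be trained with a binary segmentation loss against rasterized footprint labels, and accuracy, F1-score, and IoU would be computed on the thresholded masks, matching the metrics reported in the abstract.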

Language: English

Cited

0