Transformer-based Semantic Segmentation for Large-Scale Building Footprint Extraction from Very-High Resolution Satellite Images

Authors

Gibril, Mohamed Barakat A.
Al-Ruzouq, Rami
Shanableh, Abdallah
Jena, Ratiranjan
Bolcek, Jan
Zulhaidi Mohd Shafri, Helmi
Ghorbanzadeh, Omid

Publisher

Elsevier

Abstract

Extracting building footprints from extensive very-high spatial resolution (VHSR) remote sensing data is crucial for diverse applications, including surveying, urban studies, population estimation, identification of informal settlements, and disaster management. Although convolutional neural networks (CNNs) are commonly utilized for this purpose, their effectiveness is constrained by limitations in capturing long-range relationships and contextual details due to the localized nature of convolution operations. This study introduces the masked-attention mask transformer (Mask2Former), based on the Swin Transformer, for building footprint extraction from large-scale satellite imagery. To enhance the capture of large-scale semantic information and extract multiscale features, a hierarchical vision transformer with shifted windows (Swin Transformer) serves as the backbone network. An extensive analysis compares the efficiency and generalizability of Mask2Former with four CNN models (PSPNet, DeepLabV3+, UpperNet-ConvNext, and SegNeXt) and two transformer-based models (UpperNet-Swin and SegFormer) featuring different complexities. Results reveal superior performance of transformer-based models over CNN-based counterparts, showcasing exceptional generalization across diverse testing areas with varying building structures, heights, and sizes. Specifically, Mask2Former with the Swin Transformer backbone achieves a mean intersection over union (mIoU) between 88% and 93%, along with a mean F-score (mF-score) ranging from 91% to 96.35% across various urban landscapes.
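Illustrative sketch (not part of the published record): the abstract reports mean intersection over union and mean F-score for a two-class (building vs. background) segmentation task. Assuming per-tile prediction and ground-truth masks given as equally shaped NumPy arrays, these metrics are conventionally computed as follows.

```python
import numpy as np

def iou_and_fscore(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7):
    """IoU and F-score for one class of a binary segmentation mask pair."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()      # true positives
    fp = np.logical_and(pred, ~gt).sum()     # false positives
    fn = np.logical_and(~pred, gt).sum()     # false negatives
    iou = tp / (tp + fp + fn + eps)          # intersection over union
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    fscore = 2 * precision * recall / (precision + recall + eps)
    return iou, fscore

# Toy masks standing in for one test tile (1 = building, 0 = background).
pred_mask = np.random.randint(0, 2, (512, 512)).astype(bool)
gt_mask = np.random.randint(0, 2, (512, 512)).astype(bool)

# mIoU / mF-score are the class-wise means (building and background),
# in practice aggregated over all tiles of a test area.
b_iou, b_f = iou_and_fscore(pred_mask, gt_mask)
bg_iou, bg_f = iou_and_fscore(~pred_mask, ~gt_mask)
miou, mf = (b_iou + bg_iou) / 2, (b_f + bg_f) / 2
```

The record likewise does not fix a reference implementation for the Mask2Former-with-Swin setup described in the abstract. One hedged way to run inference is via the Hugging Face transformers library with a publicly available ADE20K-pretrained checkpoint; the checkpoint name and file path below are assumptions, and the model would be fine-tuned on building-footprint data in practice.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

# Assumed public checkpoint (Mask2Former with a Swin backbone, ADE20K semantic
# weights); the authors' own fine-tuned weights are not distributed with this record.
checkpoint = "facebook/mask2former-swin-small-ade-semantic"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = Mask2FormerForUniversalSegmentation.from_pretrained(checkpoint).eval()

image = Image.open("tile.png").convert("RGB")   # hypothetical VHSR image tile
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Combine the predicted mask proposals into a single per-pixel class map.
semantic_map = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
```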

Citation

ADVANCES IN SPACE RESEARCH. 2024, vol. 73, issue 10, p. 4937-4954.
https://www.sciencedirect.com/science/article/pii/S0273117724002205

Document type

Peer-reviewed

Document version

Published version

Language of document

en

Creative Commons license

Except where otherwise noted, this item's license is described as Creative Commons Attribution 4.0 International