A Feature Dynamic Enhancement and Global Collaboration Guidance Network for Remote Sensing Image Compression
Date
2025-06
Authors
Fang, Q. Z.
Gu, S. B.
Wang, J. G.
Zhang, L. L.
Publisher
Radioengineering Society
Abstract
Deep learning-based remote sensing image compression methods show great potential, but traditional convolutional networks focus mainly on local feature extraction and exhibit clear limitations in dynamic feature learning and global context modeling. Remote sensing images contain multi-scale local features and global low-frequency information, which are challenging to extract and fuse efficiently. To address this, we propose a Feature Dynamic Enhancement and Global Collaboration Guidance Network (FDEGCNet). First, we propose an Omni-Dimensional Attention Model (ODAM), which dynamically captures the key salient features in the image content by adaptively adjusting the feature extraction strategy, enhancing the model's sensitivity to key information. Second, a Hyperprior Efficient Attention Model (HEAM) is designed to combine multi-directional convolution and pooling operations to efficiently capture cross-dimensional contextual information and facilitate the interaction and fusion of multi-scale features. Finally, the Multi-Kernel Convolutional Attention Model (MCAM) integrates a global branch to extract frequency-domain context and enhances local feature representation through multi-scale convolutions. Experimental results show that FDEGCNet achieves significant improvement on image quality evaluation metrics (PSNR, MS-SSIM, LPIPS, and VIFp) compared to advanced compression models while maintaining low computational complexity. Code is available at https://github.com/shiboGu12/FDEGCNet.
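The full module definitions are given in the paper itself; as a rough illustration of the kind of block the abstract describes for MCAM (multi-scale local convolutions combined with a global attention branch), the following is a minimal PyTorch sketch. The class name, kernel sizes, and branch structure are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a multi-kernel convolutional attention block in the
# spirit of the MCAM description: parallel multi-scale local convolutions
# modulated by a global (channel-wise) attention branch. Not the paper's code.
import torch
import torch.nn as nn


class MultiKernelConvAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Local branch: parallel depthwise convolutions with different kernel
        # sizes to capture multi-scale local structure.
        self.local_branches = nn.ModuleList([
            nn.Conv2d(channels, channels, k, padding=k // 2, groups=channels)
            for k in (3, 5, 7)
        ])
        # Global branch: channel attention from global average pooling, a
        # stand-in for the global low-frequency context the abstract mentions.
        self.global_branch = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        local = sum(branch(x) for branch in self.local_branches)
        attn = self.global_branch(x)         # (N, C, 1, 1) channel weights
        return x + self.fuse(local * attn)   # residual connection


if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)
    print(MultiKernelConvAttention(64)(feats).shape)  # torch.Size([1, 64, 32, 32])
```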
Citation
Radioengineering. 2025, vol. 34, no. 2, pp. 324-341. ISSN 1210-2512
https://www.radioeng.cz/fulltexts/2025/25_02_0324_0341.pdf
Document type
Peer-reviewed
Document version
Published version
Language of document
en