2019
DOI: 10.3390/s19112479
A New Deep Learning Algorithm for SAR Scene Classification Based on Spatial Statistical Modeling and Features Re-Calibration

Abstract: Synthetic Aperture Radar (SAR) scene classification is challenging but widely applied, and deep learning can play a pivotal role in it because of its hierarchical feature-learning ability. In this paper, we propose a new scene classification framework, named Feature Recalibration Network with Multi-scale Spatial Features (FRN-MSF), to achieve high accuracy in SAR-based scene classification. First, a Multi-Scale Omnidirectional Gaussian Derivative Filter (MSOGDF) is constructed. Then, Multi-scale Spatial Features…

Cited by 21 publications (22 citation statements)
References 52 publications
“…The attention module can fuse the input's global information and is widely used in the field of image vision. Chen et al. [32] proposed a feature recalibration network with multi-level spatial features (FRN-MSF) to implement scene classification for 11 types of scenes from SAR images; it incorporated SENet and achieved a satisfactory classification result. Fu et al. [33] extended two types of attention modules based on the self-attention module, constructing the position attention module (PAM) and channel attention module (CAM), which work in parallel to capture the global information of the image in the spatial and channel dimensions and so obtain rich contextual information.…”
Section: Dual-Attention Mechanism
confidence: 99%
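The SENet-style recalibration mentioned above gates each feature channel by a learned weight computed from globally pooled statistics. A minimal NumPy sketch of that squeeze-and-excitation idea follows; the weight matrices `w1`, `w2` and the reduction ratio are illustrative placeholders, not values from the cited papers:

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """SE-style channel recalibration (sketch).
    feature_map: (C, H, W) array; w1: (C//r, C), w2: (C, C//r)
    are hypothetical learned fully connected weights."""
    # Squeeze: global average pooling per channel -> (C,)
    z = feature_map.mean(axis=(1, 2))
    # Excitation: bottleneck FC -> ReLU -> FC -> sigmoid gate per channel
    s = np.maximum(w1 @ z, 0.0)
    gates = 1.0 / (1.0 + np.exp(-(w2 @ s)))     # each gate in (0, 1)
    # Recalibrate: rescale every channel by its gate
    return feature_map * gates[:, None, None]

# Toy example: 4 channels, reduction ratio 2
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
w1 = 0.1 * rng.standard_normal((2, 4))
w2 = 0.1 * rng.standard_normal((4, 2))
y = squeeze_excite(x, w1, w2)
```

Because the sigmoid gates lie strictly between 0 and 1, the recalibrated map has the same shape as the input with each channel attenuated in proportion to its learned importance.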
“…Airport detection has a wide range of applications in navigation, accident detection, rescue, and aircraft positioning [4]. Existing research on airport detection is mostly based on optical remote sensing images [5] [6]. The traditional approach of extracting the airport edge line segment to perform airport detection is the most commonly used [5]-[8], but it assumes that linear features can easily be extracted from all airports. This is very challenging for airports with numerous terminals and irregular buildings.…”
Section: II. State of the Art
confidence: 99%
“…Scenery images comprise a wide variety of knowledge about the behavior of various objects, which have visible features such as borders, corners, and point clouds; these enable us to learn, modify, consider alternative solutions, and create new techniques for examining complex scenes. Scene interpretation [1, 2] should be capable of accommodating changes in the environment being observed, identifying the vital characteristics of various objects, and defining relationships among objects in order to represent the actual scene behaviors [3, 4].…”
Section: Introduction
confidence: 99%