2017
DOI: 10.1109/lgrs.2017.2727515
Fully Convolutional Network With Task Partitioning for Inshore Ship Detection in Optical Remote Sensing Images

Cited by 120 publications (59 citation statements). References 18 publications.
“…More recently, with the idea of learning theory, Refs. [21][22][23][24][25] proposed to learn high-level features automatically for classification tasks.…”
Section: Related Work
confidence: 99%
“…Here, a semi-supervised method combines the labeled and unlabeled pixels to achieve better classification in the lightweight network. Building on the strong classification performance of our preliminary work [24], the method proposed there for imbalanced PolSAR images is adopted. The cost-sensitive latent space learning network is built on the parametric feature and classifier learning framework for imbalanced PolSAR images, where classifier learning and latent space learning are defined as optimizing the posterior and likelihood functions for the labeled pixels, respectively.…”
Section: Completion of Label Matrix by Matrix Completion
confidence: 99%
“…FCN has greatly increased processing flexibility and computational efficiency, and its image-to-image mapping is naturally suited to pixel-wise image labeling tasks. In remote sensing image interpretation, such tasks include land structure segmentation, sea-land segmentation, and others [21][22][23]. However, raft labeling differs from these problems in that the semantic scales involved differ greatly.…”
Section: Introduction
confidence: 99%
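The excerpt above highlights the image-to-image property of an FCN: because every layer is convolutional, an H × W input yields an H × W output, so each pixel receives its own prediction. A minimal sketch of that property, using a single hand-rolled 3×3 "same"-padded convolution in numpy (hypothetical illustration only; not the paper's architecture, and `conv2d_same` is an assumed helper name):

```python
import numpy as np

def conv2d_same(image, kernel):
    """2-D convolution with zero padding; output shape equals input shape."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            # each output pixel is a weighted sum of its local neighborhood
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(8, 10)        # any H x W works: no fixed input size
kernel = np.ones((3, 3)) / 9.0       # simple averaging filter as a stand-in
scores = conv2d_same(image, kernel)  # per-pixel scores, still 8 x 10
labels = (scores > 0.5).astype(int)  # pixel-wise labeling (e.g. sea vs. land)
```

Because the output grid matches the input grid for arbitrary H and W, no per-region cropping or resizing is needed, which is what makes this mapping convenient for segmentation-style labeling tasks.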
“…Similar to the related works [8]–[12], the RSIs used in this paper were collected from Google Earth, with spatial resolutions ranging from 0.5 m to 2 m. Compared with objects in natural scene images, objects in RSIs (such as airports, buildings, and ships) usually exhibit many different orientations, scales, and types, since RSIs are taken from overhead [13], [14].…”
confidence: 99%