2021
DOI: 10.1109/TGRS.2020.3023112
Robust Deep Neural Networks for Road Extraction From Remote Sensing Images

Cited by 21 publications (8 citation statements)
References 39 publications
“…Inspired by LNL methods for classification, [61] and [62] appended a probabilistic model to ordinary deep neural networks in order to capture the relationship between noisy labels and their latent true counterparts for road and building extraction. Nevertheless, more common and effective solutions employ robust loss functions such as bootstrapping [63], consistency constraints [11], [64], and loss reweighting with weights of each sample estimated by an attention mechanism [65] or reliability [16].…”
Section: B. LNL for Image Segmentation Tasks (mentioning)
confidence: 99%
“…Approaches dealing with Data Noise: NSCT [95], HRO [97], PIQE [98], ACL-CNN [99], HS2P [9]. Approaches dealing with Label Noise: tRNSL [102], NTDNE [103], AF2GNN [105], RSSC-ETDL [106], CSHLC [107], I-FPFN-EM [108], FFCTL [109], RS-COCL-NLF [110], RVgg19 [111]. Table II provides more information about datasets, modalities and backbone of each reviewed methods.…”
Section: A. Data Noise (mentioning)
confidence: 99%
“…Nevertheless, a clean set of true labels calibrated by the author is required for the label correction learning process, which is costly to apply to other large-scale datasets. A promising research direction is the combination of noise minimization and correction, as explored in recent studies [109]- [111]. Most recent approaches commonly involve modeling uncertainty using probabilistic methods to address label noise [124].…”
Section: B. Label Noise (mentioning)
confidence: 99%
“…[21] incorporated noise-robust loss functions into the CNN architecture to reduce the effect of omission and registration errors in OSM. Inspired by [44], [45] and [46] adopted an extra noise adaptation layer to model a correlation between clean and noisy labels. They successfully improved the baseline model's performance with the supervision of noisy OSM labels.…”
Section: B. Noisy Supervision (mentioning)
confidence: 99%