2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw.2017.200
Dense Semantic Labeling of Very-High-Resolution Aerial Imagery and LiDAR with Fully-Convolutional Neural Networks and Higher-Order CRFs

Cited by 84 publications (78 citation statements)
References 29 publications
“…The best result, in terms of overall accuracy, was 90.3%, achieved by DLR 9 [22] and GSN3 [27]. Our best result (UFMG 4) appears in fifth place, yielding 89.4% overall accuracy and outperforming several methods, such as ADL 3 [49] and RIT L8 [50], that also tried to aggregate multi-context information. However, as can be seen in Table XI and Figure 10a, while the other approaches have a larger number of trainable parameters, our network has only 2 million, which makes it less prone to overfitting and, consequently, easier to train, showing that the proposed method helps extract all feasible information from the data even with limited architectures (in terms of parameters).…”
Section: F. Performance Analysis
confidence: 79%
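The overall-accuracy figures quoted above (90.3%, 89.4%) refer to the fraction of correctly labeled pixels in the predicted semantic map. A minimal sketch of that metric follows; the toy label maps and class IDs are illustrative only, not drawn from the benchmark:

```python
import numpy as np

def overall_accuracy(pred, gt):
    """Fraction of pixels whose predicted class matches the ground truth."""
    pred = np.asarray(pred)
    gt = np.asarray(gt)
    return float((pred == gt).mean())

# Toy 2x4 label maps with three classes (0..2); 7 of 8 pixels agree.
gt = np.array([[0, 0, 1, 1],
               [2, 2, 1, 0]])
pred = np.array([[0, 0, 1, 1],
                 [2, 1, 1, 0]])
print(overall_accuracy(pred, gt))  # → 0.875
```

Note that overall accuracy weights every pixel equally, which is why benchmarks with dominant classes (e.g., impervious surfaces) often report per-class F1 scores alongside it.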
“…The proposed work achieved competitive results, appearing in third place according to the overall accuracy. DST 5 [20] and RIT L7 [50] are the best results in terms of overall accuracy. However, they have a larger number of trainable parameters when compared to our proposed networks, as seen in Figure 10b.…”
Section: F. Performance Analysis
confidence: 99%
“…Saito and Aoki [46] used CNN-based approaches for building and road extraction. Liu et al. [34] used an FCN-8 segmentation network analyzing IR, R, and G data with 5 convolutional layers, augmented with a model based on nDSM (normalized Digital Surface Model) and NDVI. Inria competition solutions described in [29] used U-Net or SegNet approaches for segmentation.…”
Section: Building Detection
confidence: 99%
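NDVI, mentioned above as an auxiliary input in Liu et al. [34], is the standard normalized band ratio (NIR − R) / (NIR + R). A minimal sketch of computing it per pixel; the sample reflectance values are illustrative, and the small `eps` guard against division by zero is an implementation choice, not part of the cited work:

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalized Difference Vegetation Index: (NIR - R) / (NIR + R)."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)

# Illustrative reflectances: vegetation has high NIR relative to red,
# so its NDVI approaches 1; bare surfaces sit near 0.
nir = np.array([[0.8, 0.5], [0.2, 0.6]])
red = np.array([[0.2, 0.5], [0.2, 0.1]])
print(ndvi(nir, red))
```

Feeding NDVI (and height from an nDSM) as extra channels gives a network explicit vegetation and elevation cues that are hard to recover from RGB intensities alone.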
“…Data fusion is a field where UGMs could have a major impact. They have been successfully used to fuse multi-modal images for land cover classification [48,84]. However, more interesting applications would arise from the fusion of ground-level data, such as digital maps (e.g., OpenStreetMap) and geotagged data (e.g., photos, online reviews), with the remotely sensed images.…”
Section: Discussion
confidence: 99%