2014
DOI: 10.1109/tpami.2014.2315814
Structured Labels in Random Forests for Semantic Labelling and Object Detection

Abstract: Ensembles of randomized decision trees, known as Random Forests, have become a valuable machine learning tool for addressing many computer vision problems. Despite their popularity, few works have tried to exploit contextual and structural information in random forests in order to improve their performance. In this paper, we propose a simple and effective way to integrate contextual information in random forests, which is typically reflected in the structured output space of complex problems like semantic imag…

Cited by 44 publications (32 citation statements)
References 60 publications
“…In this section we give an overview of Decision Trees and the Structured Random Forest (Kontschieder et al., 2011a; Kontschieder et al., 2014). The Structured Random Forest is employed to classify each pixel using the region around it, taking its local structure into account.…”
Section: Structured Random Forest
confidence: 99%
“…To overcome this drawback, various methods have been proposed for selecting the test and training functions, such as Principal Component Analysis (PCA) and probabilistic approaches (Kontschieder et al., 2014). In our approach the output of a leaf node is computed as a joint probability distribution of the labels assigned to the leaf node.…”
Section: Structured Random Forest
confidence: 99%
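The leaf-node output described in the statement above — a joint probability distribution over the labels assigned to the leaf — can be sketched as follows. `StructuredLeaf` and its methods are hypothetical names used only for illustration, not the cited paper's implementation:

```python
from collections import Counter

class StructuredLeaf:
    """Hypothetical structured leaf: stores the label patches that reached it
    during training and aggregates them into a joint distribution."""

    def __init__(self):
        self.patch_counts = Counter()

    def add(self, label_patch):
        # label_patch: tuple of class labels for the pixels in the patch
        self.patch_counts[tuple(label_patch)] += 1

    def distribution(self):
        # joint probability distribution over stored patch configurations
        total = sum(self.patch_counts.values())
        return {p: c / total for p, c in self.patch_counts.items()}

    def predict(self):
        # mode of the joint distribution (most frequent label patch)
        return max(self.patch_counts, key=self.patch_counts.get)
```

At test time, such a leaf returns a whole labelled patch rather than a single class, which is what lets the forest encode local label structure.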
“…In our case the segmentation task is a pixelwise binary problem and, hence, the evaluation is based on the predicted segmentation mask and the ground truth mask. Based on these masks we compute the Jaccard index [16], also often termed region-based intersection over union (IU), for which we compute the average over classes (mIU) as in [17]-[20], the pixel accuracy (PA) [20], [21], the dice similarity coefficient (DSC) [14], the hit rate (HR) [14], [20] and the false acceptance rate (FAR) [14].…”
Section: A. Evaluation Protocol
confidence: 99%
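Several of the metrics listed in the statement above (per-class Jaccard/IU, its class average mIU, pixel accuracy, and the Dice similarity coefficient) follow directly from a pixel-label confusion matrix. A minimal sketch, not the cited papers' exact implementations:

```python
import numpy as np

def confusion(pred, gt, n_classes):
    # n_classes x n_classes confusion matrix over pixel labels
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for p, g in zip(pred.ravel(), gt.ravel()):
        cm[g, p] += 1
    return cm

def metrics(pred, gt, n_classes):
    cm = confusion(pred, gt, n_classes)
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1)          # per-class Jaccard index (IU)
    miou = iou.mean()                               # mean IU over classes (mIU)
    pa = tp.sum() / cm.sum()                        # pixel accuracy (PA)
    dsc = 2 * tp / np.maximum(2 * tp + fp + fn, 1)  # Dice similarity coefficient
    return iou, miou, pa, dsc
```

The hit rate and false acceptance rate can be read off the same matrix in the binary case; they are omitted here for brevity.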
“…We train a random forest similar to the randomized tree algorithms in [26], [27]. Each tree tr in a forest F is trained independently on a random subset of the training set D ⊆ X×H according to the saliency guided foreground hypothesis.…”
Section: Forest Model Training
confidence: 99%
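The per-tree training step quoted above — each tree trained independently on a random subset of the training set D — is the classic bagging recipe. A minimal sketch, with `train_tree` standing in for any tree learner (a hypothetical callable, not an API from the cited work):

```python
import random

def train_forest(train_set, n_trees, subset_frac, train_tree, seed=0):
    """Train n_trees trees, each on an independent random subset of train_set."""
    rng = random.Random(seed)
    k = max(1, int(subset_frac * len(train_set)))
    forest = []
    for _ in range(n_trees):
        subset = rng.sample(train_set, k)  # random subset, sampled without replacement
        forest.append(train_tree(subset))
    return forest
```

Training each tree on a different subset decorrelates the trees, which is what makes averaging their outputs reduce variance.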