Proceedings of the 2004 14th IEEE Signal Processing Society Workshop Machine Learning for Signal Processing, 2004.
DOI: 10.1109/mlsp.2004.1423029
Hierarchical ensemble learning for multimedia categorization and autoannotation

Abstract: This paper presents a hierarchical ensemble learning method applied in the context of multimedia autoannotation. In contrast to the standard multiple-category classification setting, which assumes an independent, non-overlapping and exhaustive set of categories, the proposed approach explicitly models the hierarchical relationships among target classes and estimates their relevance to a query as a trade-off between the goodness of fit to a given category description and its inherent uncertainty. The promi…

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
2
1
1

Citation Types

0
4
0

Publication Types

Select...
3
1

Relationship

0
4

Authors

Journals


Cited by 4 publications (4 citation statements). References 19 publications.

“…Classifiers in an ensemble can also be arranged into a hierarchy where the ensemble input is initially classified into a number of general classes before being classified to a more detailed class. This allows members of the ensemble to be trained on specific tasks, rather than the entire classification task [20], [21]. Prudent selection of hierarchical classification branches can enhance overall classification performance.…”
Section: Open Access
confidence: 99%
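
The two-stage arrangement described in this statement (a generalist classifier routing an input to a coarse class, then a branch-specific specialist refining the decision) can be illustrated with a small sketch. This is a minimal illustration using synthetic data and an assumed coarse/fine label mapping, not the ensemble of the cited paper.

```python
# Minimal sketch of a two-stage hierarchical ensemble (assumed toy setup).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Hypothetical data: 4 fine classes grouped into 2 coarse classes.
X, y_fine = make_classification(n_samples=400, n_classes=4, n_informative=6,
                                n_clusters_per_class=1, random_state=0)
coarse_of = {0: 0, 1: 0, 2: 1, 3: 1}            # fine label -> coarse label
y_coarse = np.array([coarse_of[c] for c in y_fine])

# Stage 1: a generalist classifier routes inputs to a coarse class.
router = LogisticRegression(max_iter=1000).fit(X, y_coarse)

# Stage 2: one specialist per coarse class, trained only on its own branch.
specialists = {}
for g in np.unique(y_coarse):
    mask = y_coarse == g
    specialists[g] = LogisticRegression(max_iter=1000).fit(X[mask], y_fine[mask])

def predict(x):
    """Route through the coarse classifier, then refine with the specialist."""
    g = router.predict(x.reshape(1, -1))[0]
    return specialists[g].predict(x.reshape(1, -1))[0]

print(predict(X[0]), y_fine[0])
```

Each specialist sees only the training examples of its own branch, which is what lets it focus on a narrower task than the full classification problem.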
“…The work in [21] and [22] builds and improves on a semantic ensemble which uses an ontology such as WordNet for determining a hierarchical representation of the semantic information for automatic annotation. One process specialises in propagating generic semantic information while the other process works to introduce specific knowledge about the document.…”
Section: Unsupervised Learning Approaches
confidence: 99%
“…An ontology such as WordNet could be used to find parent concepts shared between labels to apply a more general annotation, as was done in [22]. For example, instead of the top-middle image in Figure 6 being labelled "bird tree insect lizard," when it is clearly a leopard, the common parent between "lizard," "bird," and "insect", which is "animal," could be used.…”
Section: Annotation Propagation
confidence: 99%
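
The generalisation step described here, replacing a set of conflicting labels with their shared parent concept, can be sketched with NLTK's WordNet interface. Taking the first noun sense of each label is a simplifying assumption, and this is only an illustration of the idea, not the exact procedure of [22].

```python
# Sketch of finding a shared parent concept in WordNet via NLTK.
# Requires: pip install nltk, then nltk.download('wordnet') once.
from nltk.corpus import wordnet as wn

labels = ["lizard", "bird", "insect"]
synsets = [wn.synsets(w, pos=wn.NOUN)[0] for w in labels]  # first noun sense

# Fold pairwise lowest common hypernyms down to a single shared ancestor.
common = synsets[0]
for s in synsets[1:]:
    common = common.lowest_common_hypernyms(s)[0]

print(common.name())            # e.g. 'animal.n.01'
print(common.lemma_names()[0])  # the more general annotation, e.g. 'animal'
```

For the labels in the example above, folding pairwise lowest common hypernyms typically resolves to the "animal" synset, which would serve as the more general annotation.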
“…A number of techniques have been reported which are designed to uncover the latent correlation between low-level visual features and high-level semantics [2, 8-14]. Typically such approaches involve a training set of pre-annotated images and the identification of visual features in the image such as blobs or salient objects.…”
Section: Automatic Annotation Of Images
confidence: 99%
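
The latent correlation described here is often approximated by co-occurrence statistics between quantised region features ("blobs") and the keywords of pre-annotated training images. The sketch below is a hypothetical, toy-data illustration of that general idea, not a reconstruction of any specific model in [2, 8-14].

```python
# Sketch of co-occurrence-based annotation from pre-annotated images
# (assumed toy data; k-means quantisation of regions into "blobs").
import numpy as np
from collections import Counter, defaultdict
from sklearn.cluster import KMeans

# Toy training set: each image is a set of region feature vectors plus keywords.
rng = np.random.default_rng(0)
train = [
    (rng.normal(0.0, 1.0, size=(5, 8)), ["sky", "sea"]),
    (rng.normal(3.0, 1.0, size=(5, 8)), ["grass", "tree"]),
    (rng.normal(0.0, 1.0, size=(4, 8)), ["sky", "cloud"]),
]

# Quantise all region features into discrete "blobs" with k-means.
all_regions = np.vstack([feats for feats, _ in train])
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(all_regions)

# Count keyword co-occurrence with each blob index across training images.
cooc = defaultdict(Counter)
for feats, words in train:
    for blob in kmeans.predict(feats):
        cooc[blob].update(words)

def annotate(region_feats, top_k=2):
    """Score keywords by how often they co-occur with the image's blobs."""
    scores = Counter()
    for blob in kmeans.predict(region_feats):
        scores.update(cooc[blob])
    return [w for w, _ in scores.most_common(top_k)]

print(annotate(rng.normal(0.0, 1.0, size=(3, 8))))  # likely sky/sea/cloud terms
```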