2006 · IEEE Transactions on Image Processing · DOI: 10.1109/tip.2006.877496
Hierarchical Stochastic Image Grammars for Classification and Segmentation

Abstract: We develop a new class of hierarchical stochastic image models called spatial random trees (SRTs) which admit polynomial-complexity exact inference algorithms. Our framework of multitree dictionaries is the starting point for this construction. SRTs are stochastic hidden tree models whose leaves are associated with image data. The states at the tree nodes are random variables, and, in addition, the structure of the tree is random and is generated by a probabilistic grammar. We describe an efficient recursive a…
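The flavor of the recursive inference the abstract mentions can be illustrated with a toy upward likelihood pass on a hidden tree model. This is a hedged sketch, not the paper's SRT algorithm: it fixes the tree structure, whereas SRTs additionally sum over random structures generated by the probabilistic grammar. All parameter values, the tree, and the function name are hypothetical.

```python
import numpy as np

# Toy illustration (NOT the paper's algorithm): an upward likelihood
# recursion on a *fixed* binary hidden tree with K hidden states per node.
# SRT inference also marginalizes over tree structures produced by a
# stochastic grammar; that part is omitted to keep the sketch short.

K = 2                                   # hidden states per node (hypothetical)
root_prior = np.array([0.6, 0.4])       # P(state at root)
trans = np.array([[0.8, 0.2],           # P(child state | parent state)
                  [0.3, 0.7]])
emit = np.array([[0.9, 0.1],            # P(observation | leaf state)
                 [0.2, 0.8]])

def upward_likelihood(node, obs):
    """Return beta[k] = P(observations under `node` | node state = k).

    `node` is either an int observation index (a leaf) or a (left, right)
    tuple (an internal node). Cost is linear in tree size and quadratic
    in K -- the polynomial-complexity character of exact tree inference.
    """
    if isinstance(node, int):                 # leaf: emission likelihood
        return emit[:, obs[node]]
    left, right = node
    bl = upward_likelihood(left, obs)         # recurse on children
    br = upward_likelihood(right, obs)
    # Sum out each child's state conditioned on the parent's state.
    return (trans @ bl) * (trans @ br)

# Tree ((leaf0, leaf1), leaf2) with binary observations at the leaves.
tree = ((0, 1), 2)
obs = [0, 1, 0]
beta_root = upward_likelihood(tree, obs)
total = float(root_prior @ beta_root)         # P(all leaf observations)
print(round(total, 6))                        # -> 0.125345
```

Because each node combines its children with a single matrix-vector product per child, the whole pass stays polynomial in the number of nodes and states, in contrast to the exponential cost of enumerating hidden-state configurations directly.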


Cited by 35 publications (43 citation statements) · References 41 publications
“…The agglomeration of smaller parts into a composition in a hierarchical model also reduces the complexity for any search to be conducted on them. Similar compositional models for recognition can also be found in [8,4]. Although the compositional tree structures described in these models are closely tied to some grammar, none of [8,4] employ logic programs for inference.…”
Section: Introduction
Confidence: 82%
“…For example, the pictorial structures [10], [11] and constellation models [12] are planar graphs with a user-specified number of "parts," configured in a prespecified model structure. Hierarchical models are typically derived by hierarchical clustering of features [13], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28]. This hierarchical clustering can be performed with respect to a statistical dependence that exists among subsets of features or simply the spatial containment relationships between a large feature cluster (for example, large region) and its constituent subclusters (for example, embedded subregions).…”
Section: Relationship to Prior Work
Confidence: 99%
“…Finally, object recognition, in stage four, is typically evaluated only through image classification in terms of whether the learned object class/category is present or absent [12], [27], [38], [42], [43], [44]. There are also approaches that attempt object localization by placing a bounding box around a detected object or by thresholding a probabilistic map that a pixel belongs to the object given the detected features [37], [40], [41].…”
Section: Relationship to Prior Work
Confidence: 99%
“…While the perceptual organization community sought to inject mid-level, object-independent knowledge into the process, there has been a recent tendency to bypass mid-level knowledge and inject object-dependent knowledge into the process, e.g., [39,252,140,270,234,151,163,243,266,260,38,111] (see Figure 12). Cast as a knowledge-based (or top-down) segmentation problem, this bears a close resemblance to classical target (or model-based) recognition, in which an individual object model (whether exemplar or category) is used to constrain image segmentation.…”
Section: The Abstraction of Structure
Confidence: 99%