2012
DOI: 10.1109/tip.2011.2170081

Camera Constraint-Free View-Based 3-D Object Retrieval

Abstract: Recently, extensive research efforts have been dedicated to view-based methods for 3-D object retrieval due to the highly discriminative property of multiviews for 3-D object representation. However, most state-of-the-art approaches depend heavily on their own camera array settings for capturing views of 3-D objects. In order to move toward a general framework for 3-D object retrieval without the limitation of camera array restriction, a camera constraint-free view-based (CCFV) 3-D object retrieval algorithm…

Cited by 225 publications (83 citation statements)
References 45 publications
“…This kind of decomposition is a key component of many computer vision and graphics tasks. Rather than focusing on predicting human fixation points [6,32] (another major research direction of visual attention modeling), salient region detection methods aim at uniformly highlighting entire salient object regions, thus benefiting a large number of applications, including object-of-interest image segmentation [19], adaptive compression [17], object recognition [44], content-aware image editing [51], object-level image manipulation [12,15,53], and internet visual media retrieval [10,11,13,29,24,23].…”
Section: Introduction
confidence: 99%
“…Finally, it is interesting to mention that other proposals focus on the problem of retrieving 3D object models (e.g., [31,32,33,34,35]) or videos (e.g., [36]) instead of images. To improve the efficiency and accuracy of view-based 3D object retrieval, in [35] the authors propose to select the most interesting 2D views using a probabilistic Bayesian method (Adaptive Views Clustering), whereas in [33] the authors present an algorithm that minimizes the number of query views required based on information extracted from the query and the users' relevance feedback.…”
Section: Related Work
confidence: 99%
“…In [34] the query to retrieve 3D models is a set of views, but no camera constraint needs to be specified (so, any view set captured by any camera array can be used as a query). In [32] the authors present an algorithm to retrieve 3D models by sketch queries, based on the alignment of 3D models to 2D sketches.…”
Section: Related Work
confidence: 99%
“…Most previous studies focus on encoding and retrieving 3D polygon models using polygon models as input queries (Funkhouser et al., 2003; Assfalg et al., 2007; Gao et al., 2011; Akgul et al., 2009; Gao et al., 2012). However, these studies do not consider model retrieval using point clouds, which is greatly needed for efficient cyber city construction with airborne LiDAR point clouds.…”
Section: Introduction
confidence: 99%
“…For view-based retrieval, 3D shapes are represented as a set of 2D projections, and 3D models are matched using their visual similarities rather than their geometric similarities (Gao et al., 2011; Gao et al., 2012; Chen et al., 2003; Stavropoulos et al., 2010; Papadakisa et al., 2007). Each projection is described by image descriptors.…”
Section: Introduction
confidence: 99%
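The excerpt above describes the core of view-based retrieval: each 3-D object becomes a set of 2-D view descriptors, and objects are compared by visual rather than geometric similarity. The following is a minimal sketch of that idea, assuming descriptors have already been extracted; the `view_set_distance` matching rule (average nearest-view distance), the descriptor dimensions, and the object names are all illustrative assumptions, not the method of any cited paper.

```python
# Sketch of view-based 3-D object retrieval, assuming each object is
# already a (n_views, dim) array of per-view image descriptors.
# The set-to-set distance below is a simple illustrative choice.
import numpy as np

def view_set_distance(views_a, views_b):
    """Average, over views of A, of the Euclidean distance to the
    closest view of B (one-directional nearest-view matching)."""
    dists = np.linalg.norm(views_a[:, None, :] - views_b[None, :, :], axis=2)
    return dists.min(axis=1).mean()

def retrieve(query_views, database):
    """Rank database objects (name -> view array) by ascending distance."""
    scores = {name: view_set_distance(query_views, views)
              for name, views in database.items()}
    return sorted(scores, key=scores.get)

rng = np.random.default_rng(0)
q = rng.normal(size=(6, 16))                       # query: 6 view descriptors
db = {"chair": q + 0.01 * rng.normal(size=(6, 16)),  # near-duplicate object
      "lamp": rng.normal(size=(8, 16))}              # unrelated object
print(retrieve(q, db))
```

Because the matching operates on unordered view sets, the query views need not come from any fixed camera array, which is the camera-constraint-free property the surveyed approaches aim for.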