Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval 2005
DOI: 10.1145/1076034.1076158

3D viewpoint-based photo search and information browsing

Abstract: We propose a new photo search method that uses three-dimensional (3D) viewpoints as queries. 3D viewpoint-based image retrieval is especially useful for searching collections of archaeological photographs, which contain many different images of the same object. Our method is designed to enable users to retrieve images that contain the same object but show a different view, and to browse groups of images taken from a similar viewpoint. We also propose using 3D scenes to query by example, which means that users d…
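The abstract's core idea, using a 3D viewpoint as the query, can be illustrated with a minimal sketch (hypothetical names and scoring; not the authors' implementation): photos annotated with a camera position and view direction are ranked by how close their recorded viewpoint is to the query viewpoint.

```python
import math
from dataclasses import dataclass

@dataclass
class Viewpoint:
    """Camera position (x, y, z) and unit-length view direction (dx, dy, dz)."""
    position: tuple
    direction: tuple

def viewpoint_distance(a: Viewpoint, b: Viewpoint, angle_weight: float = 1.0) -> float:
    """Dissimilarity of two viewpoints: Euclidean distance between camera
    positions plus a weighted angular difference between view directions."""
    pos_dist = math.dist(a.position, b.position)
    cos_angle = sum(u * v for u, v in zip(a.direction, b.direction))
    angle = math.acos(max(-1.0, min(1.0, cos_angle)))  # radians
    return pos_dist + angle_weight * angle

def search_by_viewpoint(query: Viewpoint, photos: dict, k: int = 5):
    """Return the k photo IDs whose recorded viewpoints are closest to the query."""
    ranked = sorted(photos, key=lambda pid: viewpoint_distance(query, photos[pid]))
    return ranked[:k]

# Example: retrieve the photo taken from roughly the same spot, looking the same way.
photos = {
    "IMG_001": Viewpoint((0.0, 1.6, 0.0), (0.0, 0.0, 1.0)),
    "IMG_002": Viewpoint((5.0, 1.6, 2.0), (0.0, 0.0, -1.0)),
}
print(search_by_viewpoint(Viewpoint((0.5, 1.5, 0.2), (0.0, 0.0, 1.0)), photos, k=1))
```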

Cited by 20 publications (13 citation statements) · References: 0 publications
“…More related to our work, Ephstein et al [10] proposed to relate images with their view frustum (viewable scene) and used a scene-centric ranking to generate a hierarchical organization of images. Several additional methods were proposed for organizing images based on camera location, direction, and additional meta-data [11][12][13][14]. Although these approaches are similar to ours in using the camera field-of-view to describe the viewable scene, their main contribution is on image browsing and grouping of similar images together.…”
Section: Related Workmentioning
confidence: 98%
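As a rough illustration of the "viewable scene" / view-frustum idea referenced in the statement above (an assumption-laden sketch, not any cited system's code): a scene point counts as visible in a photo if it falls inside the camera's viewing cone, and two photos can be related when the same point passes this test for both.

```python
import math

def in_field_of_view(camera_pos, view_dir, point, fov_degrees=60.0, max_range=50.0):
    """Rough 'viewable scene' test: is `point` inside the camera's viewing cone?
    camera_pos, view_dir, point are 3D tuples; view_dir is assumed unit length."""
    to_point = tuple(p - c for p, c in zip(point, camera_pos))
    dist = math.sqrt(sum(v * v for v in to_point))
    if dist == 0 or dist > max_range:
        return False
    cos_angle = sum(v * d for v, d in zip(to_point, view_dir)) / dist
    return cos_angle >= math.cos(math.radians(fov_degrees / 2))

# A monument at (1, 0, 10) is visible from a camera at the origin looking along +z.
print(in_field_of_view((0, 0, 0), (0, 0, 1), (1, 0, 10)))  # True
```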
“…PhotoCompas (Naaman et al 2004) clusters images based on time and location. Realityflythrough (McCurdy and Griswold 2005) uses interface ideas similar to ours for exploring video from camcorders instrumented with GPS and tilt sensors, and Kadobayashi and Tanaka (2005) present an interface for retrieving images using proximity to a virtual camera. In Photowalker (Tanaka et al 2002), a user can manually author a walkthrough of a scene by specifying transitions between pairs of images in a collection.…”
Section: Image Browsing Retrieval and Annotationmentioning
confidence: 99%
“…Regarding the spatial referencing of iconographic data, current research relies on manual methods based on expert knowledge [Kadobayashi and Tanaka 2005] and on registration of the camera positions [Waldhäusl and Ogleby 1994], on semi-automatic methods based on geometric solutions for camera calibration and orientation [Tsai 1986], or on automatic methods based on analysis and image processing. These last methods estimate the position and orientation of the cameras, starting from the automatic extraction of vanishing points [Lee, Jung, and Nevatia 1992] or of homologous points identified on the images [Snavely, Seitz, and Szeliski 2006].…”
Section: Previous Workmentioning
confidence: 99%
“…These last methods estimate the position and orientation of the cameras, starting from the automatic extraction of vanishing points [Lee, Jung, and Nevatia 1992] or of homologous points identified on the images [Snavely, Seitz, and Szeliski 2006]. Some browsers have been implemented to manage 2D/3D datasets relying on these automatic methods: Photocloud [Brivio et al 2011] and Photo Tourism [Snavely, Seitz, and Szeliski 2007] manage photograph collections in space, while 4D Cities [Schindler, Dellaert, and Kang 2007] integrates the temporal dimension by means of a constraint satisfaction method, chronologically ordering the spatialized photos and automatically applying textures to 3D models.…”
Section: Previous Workmentioning
confidence: 99%