Proceedings of the 16th ACM International Conference on Multimedia 2008
DOI: 10.1145/1459359.1459401

Viewable scene modeling for geospatial video search

Abstract: Video sensors are becoming ubiquitous and the volume of captured video material is very large. Therefore, tools for searching video databases are indispensable. Current techniques that extract features purely based on the visual signals of a video are struggling to achieve good results. By considering video related meta-information, more relevant and precisely delimited search results can be obtained. In this study we propose a novel approach for querying videos based on the notion that the geographical locati…


Cited by 82 publications (52 citation statements)
References 12 publications
“…Xiaotao Liu designed and implemented the automatic video annotation and querying system, named SEVA, with rich sensor information. The SEVA system filtered and refined the query results through the adjacent and location information recorded in video streams (Liu, Corner, & Shenoy, 2005; Kim, Ay, Yu, & Zimmermann, 2010; Ay, Zhang, Kim, He, & Zimmermann, 2009; Ay, Zimmermann, & Kim, 2008; Ma, Ay, Zimmermann, & Kim, 2013). The Open Geospatial Consortium (OGC) defined the view cone model for the video frame.…”
Section: Video Retrieval Methods Based On Geographic Information (mentioning)
confidence: 99%
“…Depending on the sensors that detected them, the positions can be either geometric (a sequence of (x, y, t) triplets for 2D positions) [8] or symbolic (a sequence of (rfidtag, t) pairs) [9]. Also, depending on the object's type and on the capability of the sensors associated with the object or embedded in the environment, additional data can be associated with the object's movement (e.g., for a mobile camera, it is interesting to capture information like orientation and field of view) [10].…”
Section: Hybrid Trajectory Based Query Definition (mentioning)
confidence: 99%
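The geometric-versus-symbolic position distinction above can be sketched in Python. The class and field names (`GeometricPosition`, `SymbolicPosition`, `rfid_tag`) are illustrative choices, not notation from the cited papers:

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class GeometricPosition:
    """A 2D sensor fix: an (x, y, t) triplet."""
    x: float
    y: float
    t: float

@dataclass
class SymbolicPosition:
    """A symbolic fix, e.g. from an RFID reader: an (rfid_tag, t) pair."""
    rfid_tag: str
    t: float

# A hybrid trajectory mixes both kinds of fixes, ordered by time.
Position = Union[GeometricPosition, SymbolicPosition]

trajectory: List[Position] = [
    GeometricPosition(1.0, 2.0, 0.0),
    SymbolicPosition("door-17", 3.5),   # hypothetical tag identifier
    GeometricPosition(4.0, 2.5, 5.0),
]

# Timestamps must be non-decreasing regardless of position kind.
assert all(a.t <= b.t for a, b in zip(trajectory, trajectory[1:]))
```

Extra per-fix attributes such as a mobile camera's orientation and field of view would be additional fields on the geometric record.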
“…We represent videos as a sequence of video frames, and each video frame is modeled as a Field Of View (FOV) [3], as shown in Fig. 2.…”
Section: Video Spatial Model and Query Definitions (mentioning)
confidence: 99%
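The per-frame FOV idea can be sketched as a pie-slice viewable-scene test: a camera position, a heading, an angular extent, and a visible distance together determine which points a frame can see. The parameter names and the point-in-FOV check below are a minimal illustration under those assumptions, not the paper's exact formulation:

```python
import math
from dataclasses import dataclass

@dataclass
class FOV:
    """One video frame's viewable scene (illustrative field names)."""
    x: float       # camera position
    y: float
    theta: float   # direction the camera points, degrees from the +x axis
    alpha: float   # total angular extent of the view, degrees
    r: float       # maximum visible distance

    def contains(self, px: float, py: float) -> bool:
        """True if point (px, py) falls inside the pie-slice FOV."""
        dx, dy = px - self.x, py - self.y
        dist = math.hypot(dx, dy)
        if dist > self.r:
            return False
        if dist == 0:
            return True  # the camera's own position is trivially visible
        bearing = math.degrees(math.atan2(dy, dx))
        # smallest signed angular difference to the camera heading
        diff = (bearing - self.theta + 180) % 360 - 180
        return abs(diff) <= self.alpha / 2

fov = FOV(x=0.0, y=0.0, theta=90.0, alpha=60.0, r=100.0)
print(fov.contains(0.0, 50.0))   # True: straight ahead, within range
print(fov.contains(50.0, 0.0))   # False: 90 degrees off the heading
```

A geospatial query then reduces to testing candidate objects or query regions against each frame's FOV rather than against the raw pixels.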
“…spatial properties at the fine granularity level of the frame (e.g., Field-Of-View [3]). The FOV model has been proven to be very useful for various media applications, as demonstrated by the online mobile media management system, MediaQ [6].…”
Section: Introduction (mentioning)
confidence: 99%