2020
DOI: 10.1109/access.2020.3031485
VVS: Fast Similarity Measuring of FoV-Tagged Videos

Abstract: In this article, we propose a novel method that measures the similarity of FoV-tagged videos in two dimensions. Recently, many researchers have focused on measuring the similarity of FoV-tagged videos. This similarity measurement plays an important role in various societal applications, including urban road networks, traffic, and geographic information systems. Our preliminary work introduced the Largest Common View Subsequences (LCVS) algorithm for computing the similarity of FoV-tagged v…
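The abstract's LCVS algorithm is, by its name, a longest-common-subsequence-style comparison over sequences of FoV (field-of-view) samples. The paper's exact method is not shown here, so the following is only a minimal sketch of the general idea: a classic LCS dynamic program in which exact equality is replaced by a hypothetical `views_match()` predicate over camera location and heading. The `FoV` record, the thresholds, and the predicate are all illustrative assumptions, not the authors' definitions.

```python
from dataclasses import dataclass
import math

@dataclass
class FoV:
    """One FoV sample: camera location (lat, lon) and view heading in degrees.
    This record layout is an assumption for illustration, not the paper's model."""
    lat: float
    lon: float
    heading: float

def views_match(a: FoV, b: FoV,
                dist_thresh: float = 0.001,
                angle_thresh: float = 30.0) -> bool:
    """Hypothetical 'common view' test: two samples match when the camera
    positions are close and the headings point roughly the same way."""
    dist = math.hypot(a.lat - b.lat, a.lon - b.lon)
    # Smallest angular difference between the two headings, in [0, 180].
    angle = abs((a.heading - b.heading + 180.0) % 360.0 - 180.0)
    return dist <= dist_thresh and angle <= angle_thresh

def lcvs_length(p: list[FoV], q: list[FoV]) -> int:
    """LCS dynamic program with views_match() as the equality test,
    mirroring the idea (not necessarily the exact method) of LCVS."""
    m, n = len(p), len(q)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if views_match(p[i - 1], q[j - 1]):
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```

A similarity score can then be normalized, e.g. `lcvs_length(p, q) / max(len(p), len(q))`, so that identical view sequences score 1.0; whether the paper normalizes this way is an assumption.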

Cited by 6 publications (1 citation statement)
References 41 publications
“…In a previous study, the authors collected eye-tracking data from 1003 images of 15 viewers, recorded each observer's look tracking path and look position, and used that as ground truth data for training a saliency model using machine learning. The term geo-tagged video refers to a video obtained using a camera device with an embedded GPS sensor to capture the video's shooting time, location, and direction [4][5][6]. GeoUGV [4] is an extensive geo-tagged video dataset.…”
Section: Introduction
confidence: 99%