2016
DOI: 10.1007/978-3-319-46454-1_16
Ego2Top: Matching Viewers in Egocentric and Top-View Videos

Abstract: Egocentric cameras are becoming increasingly popular and provide us with large amounts of video captured from the first-person perspective. At the same time, surveillance cameras and drones offer an abundance of visual information, often captured from a top view. Although these two sources of information have been studied separately in the past, they have not been studied collectively and related. Given a set of egocentric cameras and a top-view camera capturing the same area, we propose a framework…


Cited by 50 publications (57 citation statements)
References 31 publications
“…We also provide the visualization results of the generated uncertainty maps (Sec. 8) and the arbitrary cross-view image translation experiments on Ego2Top dataset [1] (Sec. 9).…”
Section: Results
confidence: 99%
“…Dayton [42] and CVUSA [44]. Meanwhile, we also create a larger-scale cross-view synthesis benchmark using the data from Ego2Top [1], and present results of multiple baseline models for the research community.…”
Section: Introduction
confidence: 99%
“…An egocentric camera view is often the most natural perspective for observing an ego-vehicle environment, but it introduces additional challenges due to its narrow field of view. The literature in egocentric visual perception has typically focused on activity recognition [5], [6], [11]- [13], object detection [14]- [16], person identification [17]- [19], video summarization [20], and gaze anticipation [21]. Recently, papers have also applied egocentric vision to ego-action estimation and prediction.…”
Section: Related Work
confidence: 99%
“…First-person Cameras. Ardeshir and Borji [4] match a set of first-person videos to a set of people appearing in a top-view video using graph matching, but assume there are multiple first-person cameras sharing the same field of view at any time and only consider third-person cameras that are overhead. Fan et al. [14] identify a first-person camera wearer in a third-person video using a two-stream semi-Siamese network that incorporates spatial and temporal information from both views, and learns a joint embedding space from first- and third-person matches.…”
Section: Related Work
confidence: 99%
“…While person tracking and (re-)identification are well-studied in computer vision [37,44], only recently have they been considered in challenging scenarios of heterogeneous first-person and traditional cameras. Ardeshir and Borji [4] consider the case of several people moving around while wearing cameras, and try to match each of these first-person views to one of the people appearing in a third-person, overhead view of the scene. This is challenging because the camera wearer is never seen in their own wearable video, so he or she must be identified by matching their motion from a third-person perspective with the first-person visual changes that are induced by their movements.…”
Section: Introduction
confidence: 99%