2017
DOI: 10.1016/j.pmcj.2017.03.016

Unsupervised understanding of location and illumination changes in egocentric videos

Abstract: Wearable cameras stand out as one of the most promising devices for the coming years, and as a consequence, the demand for computer algorithms that automatically understand the videos recorded with them is growing quickly. Automatic understanding of these videos is not an easy task, and their mobile nature poses important challenges, such as changing light conditions and the unrestricted locations being recorded. This paper proposes an unsupervised strategy based on global features and manifold …

Cited by 5 publications (5 citation statements)
References 43 publications (72 reference statements)
“…The most intuitive method to understand the data complexity is by visualizing its features and the respective classes. This approach is generally restricted to low-dimensional data (2D or 3D) or to simplified versions of the feature space obtained with manifold algorithms such as Principal Component Analysis (PCA), IsoMaps, or Self-Organizing Maps [46]. A common approach when working with high-dimensional data is to determine its complexity by comparing the capabilities of different classification algorithms to capture known patterns.…”
Section: Classification Model (mentioning)
confidence: 99%
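As a minimal illustration of the visualization idea described in the statement above, the sketch below projects high-dimensional global features onto two principal components and plots them by class. It uses synthetic data and assumes NumPy, scikit-learn, and matplotlib are available; it is not code from the cited works.

```python
# Sketch: project high-dimensional "global" descriptors to 2D with PCA
# and colour the points by class to inspect how separable the data look.
# The feature matrix and labels are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Two synthetic classes of 128-dimensional descriptors.
features = np.vstack([rng.normal(0.0, 1.0, (200, 128)),
                      rng.normal(0.8, 1.0, (200, 128))])
labels = np.array([0] * 200 + [1] * 200)

# Reduce to 2D purely for visual inspection of class overlap.
embedded = PCA(n_components=2).fit_transform(features)

plt.scatter(embedded[:, 0], embedded[:, 1], c=labels, cmap="coolwarm", s=10)
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.title("2D PCA projection of global features by class")
plt.show()
```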
“…Understanding of locations, in terms of mapping the surrounding area with image features or semantically labeling the environment, is under active research in egocentric vision. In [22], a combination of scene illumination and distinct location characteristics is learned in an unsupervised way in order to enhance the usability of wearable cameras for hand detection. Location recognition is indirectly the task in [23], where a Google Glass application captures images of the user's field of view and retrieves information about the buildings in sight.…”
Section: Objects and Location Classification (mentioning)
confidence: 99%
“…Understanding of locations, in terms of mapping the surrounding area or labeling the environment, has been under active research in egocentric vision. In [14], an unsupervised way of combining scene illumination and location characteristics is proposed to enhance the usability of wearable cameras. Location recognition is indirectly the task in [15] where a Google Glass application captures images of the user's field of view and retrieves information about the buildings in sight.…”
Section: Kitchen (mentioning)
confidence: 99%
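The two statements above both refer to learning scene illumination and location characteristics from global features without supervision. The sketch below is only a simplified stand-in for that general strategy, not the cited authors' pipeline: it summarizes each frame with a coarse HSV colour histogram and groups the frames with k-means. OpenCV, NumPy, and scikit-learn are assumed; both the descriptor and the clustering step are illustrative choices.

```python
# Simplified illustration of unsupervised grouping of egocentric frames by
# global appearance (a rough proxy for illumination and location cues).
# Not the method of the cited paper; an assumed, minimal sketch only.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def global_descriptor(frame_bgr, bins=8):
    """Coarse HSV histogram capturing overall colour and brightness."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [bins, bins, bins],
                        [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def cluster_frames(frames, n_clusters=4):
    """Assign each frame to an unsupervised appearance cluster."""
    descriptors = np.array([global_descriptor(f) for f in frames])
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(descriptors)

# Usage sketch (frames would normally be sampled from a wearable-camera video):
# frames = [cv2.imread(p) for p in sorted(glob.glob("frames/*.jpg"))]
# cluster_ids = cluster_frames(frames)
```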