2015
DOI: 10.1109/jsen.2015.2411994

Gesture Recognition using Wearable Vision Sensors to Enhance Visitors' Museum Experiences

Abstract: We introduce a novel approach to the cultural heritage experience: by means of ego-vision embedded devices, we develop a system that offers a more natural and entertaining way of accessing museum knowledge. Our method is based on distributed self-gesture and artwork recognition, and needs neither fixed cameras nor radio-frequency identification sensors. We propose the use of dense trajectories sampled around the hand region to perform self-gesture recognition, understanding the way a user naturally interacts with…
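The dense-trajectory idea mentioned in the abstract (sampling points around the hand region and tracking them over time) can be sketched roughly as follows. This is a simplified, hypothetical illustration, not the authors' implementation: the function name, the `hand_box` input, and the precomputed optical-flow fields are all assumptions.

```python
import numpy as np

def track_dense_trajectories(flows, hand_box, grid_step=5, traj_len=15):
    """Sample points on a dense grid inside a hand bounding box and
    advect them through a sequence of optical-flow fields.

    flows:    list of (H, W, 2) arrays giving per-pixel (dx, dy) motion
    hand_box: (x0, y0, x1, y1) hypothetical hand region
    Returns an array of shape (num_points, traj_len + 1, 2).
    """
    x0, y0, x1, y1 = hand_box
    xs, ys = np.meshgrid(np.arange(x0, x1, grid_step),
                         np.arange(y0, y1, grid_step))
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    trajs = [pts.copy()]
    for flow in flows[:traj_len]:
        h, w = flow.shape[:2]
        # Read the flow at each point's nearest pixel and move the point.
        xi = np.clip(pts[:, 0].round().astype(int), 0, w - 1)
        yi = np.clip(pts[:, 1].round().astype(int), 0, h - 1)
        pts = pts + flow[yi, xi]
        trajs.append(pts.copy())
    return np.stack(trajs, axis=1)
```

In the dense-trajectory literature, the resulting point tracks are typically turned into normalized displacement descriptors and fed to a classifier; that step is omitted here for brevity.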

Cited by 31 publications
(33 citation statements)
References 32 publications
“…According to the survey in [11], the most commonly explored objective of egocentric vision is object recognition and tracking. Furthermore, hands are among the most common objects in the user's field of view, and proper detection, localization, and tracking can be a main input for other objectives, such as gesture recognition, understanding hand-object interactions, and activity recognition [5, 12–20]. Recently, egocentric pixel-level hand detection has attracted more and more attention.…”
Section: Related Work
confidence: 99%
“…Zhu et al. [2] extend the pixel-level method by introducing shape information of pixels based on structured forests. Baraldi et al. [5] utilize a temporal and spatial coherence strategy to improve the hand segmentation of the pixel-level method. The state-of-the-art methods use video clip EDSH1 as the training data and test their approaches on the remaining clips, EDSH2 and EDSHK.…”
Section: Evaluation on Benchmark Dataset
confidence: 99%
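The temporal-coherence strategy this statement attributes to Baraldi et al. [5] can be illustrated by a per-pixel majority vote over a short temporal window of per-frame hand masks. This is a minimal, hedged sketch under that assumption; the function name and window size are illustrative and not taken from the cited work:

```python
import numpy as np

def smooth_masks_temporally(masks, window=3):
    """Stabilize flickering per-frame hand masks with a sliding
    temporal majority vote.

    masks: (T, H, W) boolean array of per-frame hand detections
    Returns a (T, H, W) boolean array of smoothed masks.
    """
    masks = np.asarray(masks, dtype=np.int32)
    T = masks.shape[0]
    out = np.empty(masks.shape, dtype=bool)
    half = window // 2
    for t in range(T):
        lo, hi = max(0, t - half), min(T, t + half + 1)
        votes = masks[lo:hi].sum(axis=0)
        # A pixel is labeled hand only if most frames in the window agree.
        out[t] = votes * 2 > (hi - lo)
    return out
```

A spatial-coherence step (e.g. keeping only connected components overlapping the previous frame's mask) would typically follow; it is omitted here to keep the sketch short.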
“…Specifically, teleoperation, telemanipulation, and telepresence have benefited from these wearable devices, which, coupled to robots composed of wheels, a stand, and a camera, provide humans with enhanced control of a robot located in a remote environment [20], [21]. Improvements in immersion and the telepresence experience have also been made possible by the rapid progress achieved in robot and sensor technology [22], [23], [24]. In particular, humanoid robots, which try to mimic the human body's structure, movements, and sensory capabilities, offer a more natural platform for remote control, exploration, and interaction with humans and the surrounding environment [25], [26], [27].…”
Section: Related Work
confidence: 99%
“…Wearable optical see-through devices have been researched by Dalens et al. (2014) and Baraldi et al. (2015), who implemented computer vision methods on devices such as Google Glass to recognize paintings in real time and detect hand gestures that visitors could use to interact naturally with the artwork. These studies, however, did not test user experience.…”
Section: Related Work
confidence: 99%