2019
DOI: 10.1016/j.patcog.2018.11.013
Perceptually-guided deep neural networks for ego-action prediction: Object grasping

Cited by 42 publications (40 citation statements); references 59 publications.
“…In this way, several prototypes were produced and proofs of concepts were developed in order to illustrate potential use cases in various fields in relation with human-driven robotics. As a broadly connectable platform, it allows to investigate hybrid control strategies, combining biomechanical signals with motion- or eye-tracking tools and computer vision techniques (de San Roman et al, 2017; González-Díaz et al, 2019). Reachy can also help study how vision-based control strategies would help driving rehabilitation devices, such as an assistive arm fixed to a wheelchair, for use by patients suffering from Spinal Cord Injury (SCI) (Corbett et al, 2013, 2014).…”
Section: Discussion
“…When maintaining gaze on the target object, the geometry in a dynamic scene is also unstable due to micro-saccades. This is why a filtering of gaze fixation signal along the time is needed (González-Díaz et al, 2019). Moreover, today a localization of objects in a gaze-predicted area can be solved together with an object-recognition task, employing powerful deep CNN classifiers.…”
Section: Proofs of Concept
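The temporal filtering of the gaze-fixation signal mentioned above can be sketched as a sliding-window median filter over the gaze coordinates, which suppresses micro-saccade jitter while preserving the steady fixation point. This is only an illustrative sketch: the window length and the synthetic gaze trace are assumptions, not the filter actually used in González-Díaz et al. (2019).

```python
import numpy as np

def filter_fixations(gaze_xy, window=5):
    """Suppress micro-saccade jitter in a gaze trace with a
    sliding-window median filter (applied per coordinate)."""
    gaze_xy = np.asarray(gaze_xy, dtype=float)
    half = window // 2
    # Pad with edge values so the output has the same length as the input.
    padded = np.pad(gaze_xy, ((half, half), (0, 0)), mode="edge")
    out = np.empty_like(gaze_xy)
    for i in range(len(gaze_xy)):
        out[i] = np.median(padded[i:i + window], axis=0)
    return out

# Assumed example: a steady fixation near (100, 100) with one
# micro-saccade outlier at index 3.
trace = [(100, 100), (101, 99), (100, 100), (115, 130),
         (100, 101), (99, 100), (100, 100)]
smoothed = filter_fixations(trace, window=5)
```

A median (rather than a mean) is chosen here because it rejects the short outlier sample entirely instead of smearing it across neighboring fixations.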
“…Additionally, we extend our experimental work by studying alternative dimensionality reduction techniques in order to examine the validity of our assumptions. Furthermore, we applied camera ego-motion compensation, as in [20], to examine the improvement it may bestow upon our best models for both descriptors. We accumulated and present activity recognition results for each class, in the form of confusion matrices, and examine how each class performs depending on object detection performance.…”
Section: Experimental Work
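Camera ego-motion compensation, as referenced above, typically aligns consecutive egocentric frames under a global motion model so that scene motion induced by head movement can be factored out. A minimal sketch, assuming a known 3x3 homography between frames (in practice it would be estimated from matched features); the translation values below are illustrative, not from the cited work:

```python
import numpy as np

def compensate_point(H, xy):
    """Map a point from the previous frame into the current frame
    using a 3x3 homography H (projective transform)."""
    x, y = xy
    p = H @ np.array([x, y, 1.0])
    return p[:2] / p[2]  # de-homogenize

# Assumed example: the camera panned so the scene shifted by
# (-12, +5) pixels, i.e. a pure-translation homography.
H = np.array([[1.0, 0.0, -12.0],
              [0.0, 1.0,   5.0],
              [0.0, 0.0,   1.0]])
prev_gaze = (320.0, 240.0)
curr_gaze = compensate_point(H, prev_gaze)  # maps to (308.0, 245.0)
```

Applying the same transform to all points of the previous frame removes the camera-induced component of apparent motion, leaving only object or hand motion for the descriptors to capture.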
“…This is also the case of lifeLog data [23]. While for specific tasks in lifelog data new Deep Architectures are being designed [24], the standard backbones such as ResNet [25] are suitable for recognition of egocentric scenes [26]. Nevertheless, data from risky situations are not very frequent in typical lifeLog data or in commonly used datasets [27].…”
Section: Deep Learning in Lifelog Visual Content Mining