2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids)
DOI: 10.1109/humanoids.2015.7363556
Efficient aspect object models using pre-trained convolutional neural networks

Cited by 3 publications (3 citation statements)
References 15 publications
“…This was later extended to a Bayesian model able to provide localization uncertainty, a critical step forward for mobile robots with close ties to algorithms that operate under uncertainty. In work by Wilkinson and Takahashi (2015), a pretrained convolutional network was used to predict object descriptions and aspect definitions pertaining to sensory geometries in relation to objects. Unfortunately, the class and object descriptors were taken from arbitrarily chosen pretrained AlexNet layers, and the overall framework relied on a number of thresholds that are difficult to define.…”
Section: Detection, Estimation and Tracking
confidence: 99%
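The criticism in the excerpt above, that descriptors come from a human-chosen pretrained layer and matching hinges on hand-set thresholds, can be made concrete with a minimal sketch. Everything here is hypothetical: random vectors stand in for activations of a fixed AlexNet layer (e.g. a 4096-D fully connected layer), and the `threshold` constant is exactly the kind of value the citing authors call difficult to define.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for activations of one fixed, human-chosen network layer;
# a real system would obtain these from a forward pass through AlexNet.
DIM = 4096
stored_descriptors = {
    "mug": rng.normal(size=DIM),
    "box": rng.normal(size=DIM),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(query, descriptors, threshold=0.05):
    """Match a query activation vector against stored object descriptors.

    `threshold` is a hand-tuned constant: too low and random clutter
    matches, too high and valid views are rejected.
    """
    best_name, best_sim = None, -1.0
    for name, desc in descriptors.items():
        sim = cosine(query, desc)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= threshold else "unknown"

# A noisy view of the mug should still match its stored descriptor.
query = stored_descriptors["mug"] + 0.1 * rng.normal(size=DIM)
print(classify(query, stored_descriptors))
```

The sketch shows why the pipeline is brittle: both the layer that produces the descriptors and the acceptance threshold are fixed by hand rather than learned.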
“…The technique localized features corresponding to high activations, given point clouds of simple household objects, through targeted backpropagation. Using this, they presented a hierarchical controller composed of finger and palm pre-posture positions on the R2 robot; however, as in the work by Wilkinson and Takahashi (2015), the specific layer from which to obtain information is still human-defined.…”
Section: From Perception to Motor Control
confidence: 99%
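The targeted-backpropagation idea mentioned above can be sketched on a toy scale: pick the strongest activation in a layer and follow its gradient back to the input to see which input features drive it. The single relu layer below is a stand-in, not the cited network, and the input vector stands in for point-cloud features.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for one network layer: a = relu(W x).
n_units, n_inputs = 8, 16
W = rng.normal(size=(n_units, n_inputs))
x = rng.normal(size=n_inputs)

a = np.maximum(W @ x, 0.0)     # forward pass
k = int(np.argmax(a))          # target the strongest activation

# Targeted backpropagation: gradient of a[k] with respect to the input.
# For a = relu(W x), d a[k] / d x = W[k] when a[k] > 0, else 0.
grad = W[k] * (a[k] > 0.0)

# Input features with the largest absolute gradient are the ones
# "localized" as responsible for the high activation.
salient = np.argsort(-np.abs(grad))[:3]
print(salient)
```

Even in this toy form the human-defined choice survives: someone must decide which layer (here, the only one) and which activation to backpropagate from.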
“…Operating in embedding space has become an attractive way to deal with high-dimensional input, especially when similarity functions are difficult to define on the input space. Robotic sensors such as cameras and lasers have these traits, motivating an array of methods for visual retrieval [2], [24], and recognition [1], [29], [10]. Embedded representations for depth sensors in particular have been used to compute motor commands [23], estimate odometry [21], predict loop closure [17], and even predict the presence of glass [11].…”
Section: Introduction
confidence: 99%
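The embedding-space retrieval pattern described in that excerpt can be sketched in a few lines. This is an assumption-laden illustration: a fixed random projection stands in for a learned embedding function, and random vectors stand in for raw sensor readings such as laser scans.

```python
import numpy as np

rng = np.random.default_rng(2)

# Raw readings where Euclidean distance is a poor similarity measure;
# an embedding f(.) is assumed to map them to a space where distance
# reflects semantic similarity.  A random projection stands in for f.
RAW_DIM, EMB_DIM = 360, 16
P = rng.normal(size=(EMB_DIM, RAW_DIM)) / np.sqrt(RAW_DIM)

def embed(scan):
    z = P @ scan
    return z / np.linalg.norm(z)   # unit-normalized embedding

database = rng.normal(size=(100, RAW_DIM))   # stored scans
db_emb = np.stack([embed(s) for s in database])

def retrieve(query_scan):
    """Nearest neighbour in embedding space (cosine similarity)."""
    q = embed(query_scan)
    return int(np.argmax(db_emb @ q))

# A slightly perturbed copy of scan 42 should retrieve scan 42.
noisy = database[42] + 0.01 * rng.normal(size=RAW_DIM)
print(retrieve(noisy))
```

Retrieval, loop-closure detection, and recognition in the cited works all reduce to this pattern: compare in the embedding space, not the raw sensor space.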