2016 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2016.7487401

Watch-Bot: Unsupervised learning for reminding humans of forgotten actions

Abstract: We present a robotic system that watches a human using a Kinect v2 RGB-D sensor, detects what he forgot to do while performing an activity, and if necessary reminds the person using a laser pointer to point out the related object. Our simple setup can be easily deployed on any assistive robot. Our approach is based on a learning algorithm trained in a purely unsupervised setting, which does not require any human annotations. This makes our approach scalable and applicable to variant scenarios. Our mode…
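The abstract describes a watch-detect-remind loop: the robot observes RGB-D frames, an unsupervised model flags an action the person appears to have skipped, and a laser pointer highlights the related object. A minimal sketch of that control loop, under assumed interfaces, is below; the names (UnsupervisedActivityModel, watch_and_remind, camera.read, laser.point_at) are illustrative placeholders, not the authors' implementation.

```python
# Hypothetical sketch of the watch-and-remind loop described in the abstract.
# All class/function names are placeholders, not the Watch-Bot authors' code.

from dataclasses import dataclass


@dataclass
class Reminder:
    forgotten_action: str   # e.g. "put-milk-back-to-fridge"
    related_object: str     # object the laser pointer should indicate
    location_xyz: tuple     # 3D position of that object from the RGB-D frame


class UnsupervisedActivityModel:
    """Placeholder for a model trained without any human annotations."""

    def update(self, rgbd_frame):
        """Accumulate observations from the current RGB-D frame."""
        ...

    def forgotten_action(self):
        """Return a Reminder if a usually-present action seems missing, else None."""
        ...


def watch_and_remind(camera, laser, model, max_frames=10_000):
    """Observe a person; if an action appears forgotten, point at the related object."""
    for _ in range(max_frames):
        frame = camera.read()          # RGB-D frame, e.g. from a Kinect v2 driver
        if frame is None:
            break
        model.update(frame)
        reminder = model.forgotten_action()
        if reminder is not None:
            laser.point_at(reminder.location_xyz)   # physical reminder, as in the paper
            return reminder
    return None
```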

Cited by 15 publications (9 citation statements)
References 29 publications
“…This shows the great adaptability property of our framework because the two datasets are quite different in terms of points of view and actions involved. In Table 3, we report the ratio of correctly labeled frames (Accuracy) as in [28,29]. We observe a strong improvement compared to approaches of the literature, i.e.…”
Section: Approach (mentioning)
confidence: 86%
“…Accuracy: CaTM [28] 38.5, WBTM [29] 41.2, PoT [24] 49.93, KMHIS [11] 59. Figure 7: Qualitative result on the sequence microwaving food on CAD-120.…”
Section: Approach (mentioning)
confidence: 99%
“…Second, co-occurring actions have variations in temporal orderings, e.g., people can first put-milk-back-to-fridge then microwave-milk instead of the inverse order in the above example, as its ordering is more relevant to the action fetch-milk-from-fridge. Moreover, these ordering relations could exist in both short-range and long-range, e.g., pouring is followed by drink while sometimes fetch-book is related to put-back-book with a… (Footnote: Parts of this work have been published in [57], [58] as the conference version.) Fig.…”
Section: Introduction (mentioning)
confidence: 99%
“…The bag of words paradigm ignores the location of features during feature extraction and has attained success in action classification in videos [7, 5]. Most research is based on classifying activities after complete observation of the entire video sequence.…”
Section: Figure 1: Taxonomy of Approaches to Human Action Recognition (mentioning)
confidence: 99%
“…In the past five years, approaches using spatio-temporal features have obtained success in recognizing activities [7]. The bag of words paradigm ignores the location of features during feature extraction and has attained success in action classification in videos [7, 5].…”
Section: Figure 1: Taxonomy of Approaches to Human Action Recognition (mentioning)
confidence: 99%
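The two statements above refer to the bag-of-words paradigm, in which local spatio-temporal descriptors are quantized against a learned codebook and pooled into an order-free histogram, discarding where and when each feature occurred. A minimal, generic sketch of that idea is below, using scikit-learn k-means as the codebook; all names, sizes, and the random stand-in descriptors are illustrative and not taken from the cited papers.

```python
# Generic illustration of the bag-of-words idea for video features: local
# descriptors are quantized against a codebook and pooled into a histogram,
# discarding spatial/temporal positions. Not code from the cited papers.

import numpy as np
from sklearn.cluster import KMeans


def build_codebook(descriptors, vocab_size=200, seed=0):
    """Cluster local descriptors (n_features x dim) into a visual vocabulary."""
    return KMeans(n_clusters=vocab_size, random_state=seed, n_init=10).fit(descriptors)


def bow_histogram(codebook, descriptors):
    """Histogram of codeword assignments; feature locations are ignored."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)   # L1-normalize so clip length doesn't matter


# Example with random stand-ins for spatio-temporal descriptors (e.g. HOG/HOF):
rng = np.random.default_rng(0)
train_descriptors = rng.normal(size=(5000, 64))
codebook = build_codebook(train_descriptors, vocab_size=50)
clip_descriptors = rng.normal(size=(300, 64))
clip_representation = bow_histogram(codebook, clip_descriptors)   # fixed-length feature
```

The resulting histogram is a fixed-length clip descriptor independent of feature location and ordering, which is exactly the property the quoted statements highlight.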