Proceedings of the 2015 ACM International Symposium on Wearable Computers (ISWC '15)
DOI: 10.1145/2802083.2808398
Predicting daily activities from egocentric images using deep learning

Abstract: We present a method to analyze images taken from a passive egocentric wearable camera, along with contextual information such as time and day of week, to learn and predict the everyday activities of an individual. We collected a dataset of 40,103 egocentric images over a 6-month period spanning 19 activity classes and demonstrate the benefit of state-of-the-art deep learning techniques for learning and predicting daily activities. Classification is conducted using a Convolutional Neural Network (CNN) with a classi…
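The abstract describes fusing CNN image features with contextual features such as time of day and day of week before classifying among 19 activities. The sketch below is a hypothetical illustration of that idea, not the authors' architecture: the PyTorch framework, ResNet-18 backbone, feature dimensions, and concatenation-based fusion head are all assumptions made for illustration.

```python
# Hypothetical sketch only (assumptions: PyTorch/torchvision, a ResNet-18 backbone,
# and a simple concatenation-based fusion head); the paper's actual CNN and
# classification method are not reproduced here.
import torch
import torch.nn as nn
from torchvision import models


class EgocentricActivityNet(nn.Module):
    """CNN image features fused with contextual features (hour, weekday)."""

    def __init__(self, num_classes: int = 19, context_dim: int = 2):
        super().__init__()
        # Image encoder: replace the final FC layer so the backbone returns
        # a 512-dim feature vector (pretrained weights could be used instead).
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()
        self.backbone = backbone
        # Classifier over concatenated image + context features.
        self.head = nn.Sequential(
            nn.Linear(feat_dim + context_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, images: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # images: (B, 3, 224, 224); context: (B, 2) holding normalized
        # hour-of-day and day-of-week values.
        feats = self.backbone(images)
        return self.head(torch.cat([feats, context], dim=1))


# Dummy forward pass: 4 images with their time/day context -> 19 activity logits.
model = EgocentricActivityNet()
logits = model(torch.randn(4, 3, 224, 224), torch.rand(4, 2))
print(logits.shape)  # torch.Size([4, 19])
```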

Cited by 66 publications (73 citation statements)
References 28 publications
“…As such our work cannot be directly compared against these; instead we show comparisons among various CNN-based models trained on eye images and against human ratings to benchmark our results. Most attempts at personalization use per-subject samples and quick retraining [8,39]. However there has also been some work at personalizing expression classification without retraining with new samples [9] based on unsupervised generalization with STM.…”
Section: Related Work (mentioning; confidence: 99%)
“…However, activity recognition from first-person (egocentric) photo-streams has received relatively little attention in the literature [5,4,18,3]. One of its major challenges is that photo-streams are characterized by a very low frame-rate, and consequently useful important features such as optical flow cannot be reliably estimated.…”
Section: Introduction (mentioning; confidence: 99%)
“…[11] actually collected 40,000 images in 26 weeks by recording with a wearable camera and annotated all of the images.…”
Section: Introduction (mentioning; confidence: 99%)