Adjunct Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization 2021
DOI: 10.1145/3450614.3461684

Wearable System for Personalized and Privacy-preserving Egocentric Visual Context Detection using On-device Deep Learning

Abstract: Wearable egocentric visual context detection raises privacy concerns and is rarely personalized or on-device. We created a wearable system, called PAL, with on-device deep learning, so that user images do not have to be sent to the cloud and can instead be processed on-device in a real-time, offline, and privacy-preserving manner. PAL enables human-in-the-loop context labeling using wearable audio input/output and a mobile/web application. PAL uses on-device deep learning models for object and face …

Cited by 2 publications (3 citation statements)
References 39 publications
“…Steil et al (2019) presented a system which shuts off the video stream when sensitive visual content is detected, only to reactivate it based on the analysis of eye movements recorded by additional eye-tracking cameras. Along the same lines, Khan et al (2021) proposed a deep-learning-based device which detects user-customised privacy-sensitive content, such as objects and faces of specific people, in order to serve as a privacy filter, blocking images which do not satisfy the established privacy constraints. Qiu et al (2023) investigated how converting images into rich text descriptions can serve as an effective privacy-preserving approach for passive dietary intake monitoring from egocentric images, as compared to directly storing the input images.…”
Section: Privacy Preserving By Design
confidence: 99%
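The privacy-filter idea attributed to Khan et al (2021) can be sketched as a simple on-device gate: run detectors locally, then discard any frame whose detections match the user's sensitive list before anything is stored or transmitted. The function names and data shapes below are illustrative assumptions, not the paper's actual API.

```python
# Minimal sketch of an on-device privacy gate (illustrative, not the paper's API).
# A frame's detections (labels from local object/face models) are checked against
# a user-customised blocklist; matching frames are dropped before any storage.

def passes_privacy_filter(detections: set, blocklist: set) -> bool:
    """Return True only if the frame contains no user-flagged content."""
    return detections.isdisjoint(blocklist)

def filter_frames(frames, blocklist):
    """Keep only frames whose detections satisfy the privacy constraints."""
    return [f for f in frames if passes_privacy_filter(f["detections"], blocklist)]
```

Because the check runs against labels produced by on-device models, no image ever needs to leave the device for the constraint to be enforced.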
“…We performed technical evaluations of PAL. We tested PAL's deep learning models with 4 participants for 2 days each (~1000 in-the-wild images) [19]. Each model had over 80% accuracy: Object Detection: 98.8% (F1 = 0.79, ~1000 instances); Face Detection: 88.8% (F1 = 0.9, ~180 instances); Custom Face Recognition: 86.9% (4 faces, 120 instances); Custom Contexts: 87.2% (7 activities, ~350 images); Custom Clusters: 82% (19 contexts, ~300 images).…”
Section: Evaluations, Applications, and Future Work
confidence: 99%
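Accuracy and F1 can diverge sharply under class imbalance, which is how a detector can report 98.8% accuracy alongside F1 = 0.79 as above. The counts in the example below are made up for illustration (they are not from the paper); they simply show many true negatives inflating accuracy while a modest number of false positives and negatives holds F1 down.

```python
# Accuracy counts true negatives; F1 does not, so abundant negatives
# (e.g. most frames lacking a given object) push the two metrics apart.

def accuracy(tp, fp, fn, tn):
    return (tp + tn) / (tp + fp + fn + tn)

def f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

With hypothetical counts tp=40, fp=10, fn=11, tn=939, accuracy comes out near 0.98 while F1 stays near 0.79.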
“…PAL also supports user input for human-in-the-loop training of personalized visual contexts. We used on-device models for generic object and face detection, personalized and low-shot custom face recognition, and semi-supervised custom context clustering [19]. Compared to existing wearable systems, which use at least 100 training images per custom context [21] and do not use privacy-preserving on-device deep learning, PAL's on-device models for low-shot and continual learning use ~10 training images per context.…”
Section: Introduction
confidence: 99%
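Low-shot personalization with ~10 images per context is commonly done by embedding each image with a frozen on-device model and classifying by nearest class centroid; whether PAL uses exactly this scheme is an assumption here, and the toy 2-D "embeddings" below stand in for real feature vectors.

```python
# Toy sketch of low-shot classification: average the few labeled embeddings per
# class into a centroid, then assign new embeddings to the nearest centroid.
# Real systems would use feature vectors from an on-device network, not 2-D points.

def fit_centroids(examples):
    """examples: dict mapping label -> list of embedding vectors (few per label)."""
    centroids = {}
    for label, vecs in examples.items():
        n, dim = len(vecs), len(vecs[0])
        centroids[label] = [sum(v[i] for v in vecs) / n for i in range(dim)]
    return centroids

def classify(embedding, centroids):
    """Return the label whose centroid is closest in squared Euclidean distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(embedding, centroids[lbl]))
```

A scheme like this needs only a handful of labeled examples per class and supports continual addition of new contexts, since adding a class is just adding a centroid.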