Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of The 2020
DOI: 10.1145/3410530.3414320
Fine-grained activities recognition with coarse-grained labeled multi-modal data

Abstract: Fine-grained human activity recognition focuses on recognizing event- or action-level activities, which enables a new set of Internet-of-Things (IoT) applications such as behavior analysis. Prior work on fine-grained human activity recognition relies on supervised sensing, which makes the fine-grained labeling labor-intensive and difficult to scale up. On the other hand, it is much more practical to collect coarse-grained labels at the level of activities of daily living (e.g., cooking, working), especially for…

Cited by 13 publications (6 citation statements)
References 23 publications
“…Traditional coarse-grained activities are concerned with scene-level information, usually involving discrete activities with highly distinct inter-class features. They do not include detailed features on continuous activities in applications [11]. These highly distinct inter-class features are presented in many popular benchmark activity datasets, such as KTH [12], UT-interaction [13], and UTKinect-Action3D [14].…”
Section: Related Work
confidence: 99%
“…Physical vibration signals induced by people in buildings are used to indirectly infer both physical and physiological human information, including but not limited to identity (Pan et al, 2017 ), location (Mirshekari et al, 2018 ; Drira et al, 2021 ), activity (Hu et al, 2020 ; Sun et al, 2020 ), heartbeat (Jia et al, 2016 ), and gait (Fagert et al, 2019 ). The intuition is that people induce physical vibrations all the time, such as stepping on the floor, the heart pounding in the chest, etc.…”
Section: Related Work
confidence: 99%
“…Systems often require sensors to have overlapping sensing areas to enable applications such as step-level localization (Mirshekari et al, 2018 ), gait analysis (Fagert et al, 2020 ), and activity recognition (Hu et al, 2020 ; Sun et al, 2020 ). On the other hand, for applications such as localization and gait analysis, the sensor devices' locations in the room coordinate system are also needed.…”
Section: Related Work
confidence: 99%
“…the occupant action. As a result, the captured vibration can be used to infer building occupant information, such as presence [42], occupancy [43], identity [44], activity [30], and location [39]. The advantages of this sensing modality include non-line-of-sight (NLOS) sensing without major privacy concerns.…”
Section: Vibration-based Occupancy Sensing
confidence: 99%