2019
DOI: 10.1177/1550147719894532
Human action recognition based on low- and high-level data from wearable inertial sensors

Abstract: Human action recognition supported by highly accurate specialized systems, ambulatory systems, or wireless sensor networks has tremendous potential in the areas of healthcare or wellbeing monitoring. Recently, several studies have focused on the recognition of actions using wearable inertial sensors, in which raw sensor data are used to build classification models, and in a few of them high-level representations are obtained that are directly related to anatomical characteristics of the human body. T…

Cited by 13 publications (7 citation statements) | References 58 publications
“…They combined DPI and att-DTIs through multi-stream deep neural networks and a late fusion scheme. Inertial sensor-based low-level and high-level features are used in [46] to categorize human actions performed by a subject in real time. Haider et al. [47] introduced balanced, imbalanced, and super-bagging methods to recognize volleyball actions.…”
Section: Related Work
confidence: 99%
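The combination of low-level inertial features with a standard classifier, as mentioned for [46], can be illustrated with a minimal hypothetical sketch: windowed statistical features are extracted from tri-axial accelerometer data and fed to an off-the-shelf model. The window size, feature set, placeholder data, and random-forest choice are illustrative assumptions, not the method of the cited paper.

```python
# Hypothetical sketch: low-level statistical features from windowed
# accelerometer data, passed to a generic classifier. All parameters
# and data shapes are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def low_level_features(window):
    """window: (n_samples, 3) tri-axial accelerometer segment."""
    return np.concatenate([
        window.mean(axis=0),                            # per-axis mean
        window.std(axis=0),                             # per-axis standard deviation
        np.abs(np.diff(window, axis=0)).mean(axis=0),   # mean absolute first difference
    ])

def segment(signal, window_size=128, step=64):
    """Sliding-window segmentation of a (n_samples, 3) signal."""
    return [signal[i:i + window_size]
            for i in range(0, len(signal) - window_size + 1, step)]

# Placeholder recordings and labels, only to make the sketch runnable.
rng = np.random.default_rng(0)
recordings = [rng.standard_normal((512, 3)) for _ in range(20)]
actions = rng.integers(0, 4, size=20)

X, y = [], []
for rec, label in zip(recordings, actions):
    for win in segment(rec):
        X.append(low_level_features(win))
        y.append(label)

clf = RandomForestClassifier(n_estimators=100).fit(np.array(X), y)
```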
“…HAR monitoring using these sensors can characterize human movement (e.g., walking) given a set of observations. This process can be achieved by monitoring and analyzing walking information acquired from various sources such as the environment and sensors [18,19].…”
Section: Introduction
confidence: 99%
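As a small illustration of characterizing a movement such as walking from sensor observations, the hypothetical sketch below estimates the dominant step frequency of the accelerometer magnitude; the sampling rate and cadence band are assumed values, not taken from the cited works.

```python
# Hypothetical sketch: decide whether an accelerometer segment looks like
# walking by checking the dominant frequency of its magnitude signal.
import numpy as np

def dominant_frequency(acc, fs=50.0):
    """acc: (n_samples, 3) accelerometer data; returns the dominant frequency in Hz."""
    magnitude = np.linalg.norm(acc, axis=1)
    magnitude -= magnitude.mean()                  # remove the gravity/DC component
    spectrum = np.abs(np.fft.rfft(magnitude))
    freqs = np.fft.rfftfreq(len(magnitude), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]      # skip the DC bin

def looks_like_walking(acc, fs=50.0, band=(1.4, 2.5)):
    """Assumed walking cadence band of roughly 1.4-2.5 Hz."""
    f = dominant_frequency(acc, fs)
    return band[0] <= f <= band[1]
```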
“…Many previous action recognition frameworks directly match low-level features with action class labels [13][14][15][16], where the abundant visual spatiotemporal information can hardly be generalized from the raw low-level features. To overcome this drawback, recent works show that attributes built upon the raw low-level features can act as higher-level semantic concepts and bridge the gap between low-level features and action class labels.…”
Section: Introduction
confidence: 99%
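The attribute idea described in the excerpt above can be sketched as a two-stage pipeline: low-level features are first mapped to intermediate attribute scores, and the action classifier then operates on those scores. The attribute annotations, data shapes, and both model choices below are illustrative assumptions rather than any specific cited method.

```python
# Hypothetical sketch: attributes as an intermediate layer between
# low-level features and action class labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

# Placeholder data: low-level feature vectors, binary attribute annotations,
# and action labels (shapes chosen only to make the sketch runnable).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32))
A = rng.integers(0, 2, size=(200, 5))
y = rng.integers(0, 4, size=200)

# Stage 1: one detector per attribute, trained on low-level features.
attribute_models = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, A)

# Stage 2: action classifier trained on the predicted attribute scores.
attribute_scores = attribute_models.predict_proba(X)
action_model = LinearSVC().fit(attribute_scores, y)

predicted_actions = action_model.predict(attribute_scores)
```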