2022
DOI: 10.1016/j.inffus.2021.11.006
Multi-sensor information fusion based on machine learning for real applications in human activity recognition: State-of-the-art and research challenges

Cited by 410 publications (155 citation statements). References 225 publications.
“…Although it has recently been demonstrated that using multisensor data fusion has higher recognition accuracy for complex human movements than a single sensor [ 15 ], our study aimed to examine the trajectory of sway during quiet stance, which had no movement complexity. Therefore, we positioned a single sensor in the middle of the user's waist, at the L5 vertebra in the lumbar area of the trunk, as evidenced by a previous study that indicated that the central sensors had a higher recognition rate than the left and right ones [ 23 ].…”
Section: Discussion
confidence: 99%
“…A key issue that the community is paying increasing attention to is model robustness and reliability [308,309]. One common way to improve robustness is to leverage the benefits of multiple types of sensors together to create multi-sensory systems [249,310-314].…”
Section: Model Robustness
confidence: 99%
“…Dedicated pre-processing pipelines will be applied for different sensor types, but fusion across sensors will provide a spatially and temporally contextualised overview of ADL behaviours and anomalies. We will explore which sensors are best combined for different activities and at what level of integration, and also the optimal stages at which to fuse activity with location data, ranging from early (naïve multimodal feature set integration) to late (hierarchical integration of posterior probabilities from deep learning models trained on individual feature sets) fusion strategies [23,56].…”
Section: Location and Activity Detection From Sphere Sensor Data
confidence: 99%
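The early-to-late fusion spectrum described in the statement above can be sketched in a few lines. This is a minimal illustration, not the cited pipeline: the feature dimensions, the random placeholder features, and the fifty-fifty average of posteriors are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_classes = 8, 4
# Placeholder feature sets standing in for two sensor modalities
# (dimensions are arbitrary, chosen only for the example).
accel_feats = rng.normal(size=(n_samples, 6))     # e.g. wearable features
location_feats = rng.normal(size=(n_samples, 3))  # e.g. location features

# Early (feature-level) fusion: concatenate feature sets before modelling.
early_fused = np.concatenate([accel_feats, location_feats], axis=1)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Late (decision-level) fusion: combine posterior probabilities from
# models trained separately on each feature set. Random logits stand in
# for the per-modality model outputs here.
posterior_a = softmax(rng.normal(size=(n_samples, n_classes)))
posterior_b = softmax(rng.normal(size=(n_samples, n_classes)))
late_fused = 0.5 * (posterior_a + posterior_b)  # simple equal-weight average
prediction = late_fused.argmax(axis=1)
```

In practice the combination rule for late fusion can itself be learned (e.g. weighted or hierarchical integration of posteriors, as the excerpt suggests), whereas early fusion lets a single model exploit cross-modality correlations at the cost of a larger joint feature space.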
“…This sequential dependence of states can be modelled using Viterbi decoding, which tracks a maximum-likelihood path through a series of noisy, probabilistic transitions in a Hidden Markov Model (HMM) [56,57]. We will compare HMM activity classification and localization performance to deep learning architectures capable of leveraging spatial and temporal relationships in the multimodal feature space such as convolutional neural networks and long short-term memory networks [23]. We will compare these with pipelines exploiting both activity and location data together for further disambiguation of behaviour.…”
Section: Location and Activity Detection From Sphere Sensor Data
confidence: 99%
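The Viterbi decoding mentioned above, which tracks a maximum-likelihood state path through an HMM, can be sketched as follows. The two "sticky" states and the transition/emission probabilities are toy values for illustration, not the study's actual activity model.

```python
import numpy as np

def viterbi(log_emissions, log_trans, log_init):
    """Maximum-likelihood state path through an HMM (log domain).

    log_emissions: (T, S) per-frame log-likelihood of each state
    log_trans:     (S, S) log transition matrix, row = from-state
    log_init:      (S,)   log initial state distribution
    """
    T, S = log_emissions.shape
    delta = log_init + log_emissions[0]        # best score ending in each state
    backptr = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans    # scores[i, j]: from i to j
        backptr[t] = scores.argmax(axis=0)     # best predecessor for each state
        delta = scores.max(axis=0) + log_emissions[t]
    # Trace the best path backwards from the most likely final state.
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]

# Toy example: two sticky states; emissions favour state 0 for the first
# three frames and state 1 for the last three.
emissions = np.log([[0.8, 0.2]] * 3 + [[0.2, 0.8]] * 3)
trans = np.log([[0.9, 0.1], [0.1, 0.9]])
init = np.log([0.5, 0.5])
path = viterbi(emissions, trans, init)  # -> [0, 0, 0, 1, 1, 1]
```

The sticky transition matrix is what gives the sequential smoothing the excerpt describes: an isolated noisy frame is unlikely to flip the decoded state, because a transition costs more than one weak emission.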