Proceedings of the 2018 ACM International Symposium on Wearable Computers 2018
DOI: 10.1145/3267242.3267287
On attention models for human activity recognition

Abstract: Most approaches that model time-series data in human activity recognition based on body-worn sensing (HAR) use a fixed-size temporal context to represent different activities. This might, however, not be apt for sets of activities with individually varying durations. We introduce attention models into HAR research as a data-driven approach for exploring relevant temporal context. Attention models learn a set of weights over input data, which we leverage to weight the temporal context being considered to model …

Cited by 112 publications (64 citation statements); references 10 publications.
“…One drawback of the DeepConvLSTM is that it potentially assumes the signals in all time steps are relevant and contribute equally to the target activity, which may not be true. Murahari et al. [11] propose to solve this problem by integrating a temporal attention module into DeepConvLSTM. The attention module aligns the output vector at the last time step with the vectors at earlier steps to learn a relative importance score for each previous time step.…”
Section: Related Work
Confidence: 99%
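The alignment step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: dot-product alignment between the last hidden state and the earlier states is an assumption (the paper uses a learned scorer), and all variable names are illustrative.

```python
import numpy as np

def temporal_attention_scores(hidden, last):
    # hidden: (T, d) array of LSTM hidden states for T time steps
    # last:   (d,)   hidden state at the final time step
    # Align the last state with every time step via a dot product,
    # then normalize with softmax to get relative importance scores.
    logits = hidden @ last            # (T,) one alignment score per step
    logits = logits - logits.max()    # shift for numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()    # (T,) scores that sum to 1

# Toy usage: 5 time steps with 8-dimensional hidden states.
rng = np.random.default_rng(0)
h = rng.normal(size=(5, 8))
w = temporal_attention_scores(h, h[-1])
```

The softmax normalization is what turns raw alignment scores into the "relative importance" of each previous time step.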
“…Compared to the previous data representation models, the dual-stream representation module is more accurate because it encapsulates spatial and temporal correlations jointly. Besides, it is more lightweight and easier to train than LSTM-based approaches [8,11,12].…”
Section: Dual-stream Representation Module
Confidence: 99%
“…Their visualization of the attention scores satisfied the expectation of subset learning of sensors at important moments. Along the same route, Murahari et al. [20] focused on temporal attention, which they embedded at the end of a convolutional LSTM network (Conv-LSTM) [25]. They used tanh and softmax functions to compute attention scores from the LSTM outputs; the weighted sum of all previous LSTM hidden states, rather than only the last hidden state, was then used for classification.…”
Section: B. Attention Mechanism Adapted for HAR on MoCap Data
Confidence: 99%
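The tanh/softmax scoring with weighted-sum pooling described above can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the projection matrix `W` and scoring vector `v` stand in for learned parameters, and the exact parameterization in the cited works may differ.

```python
import numpy as np

def attend_and_pool(hidden, W, v):
    # hidden: (T, d) LSTM outputs over T time steps
    # W:      (d, d) learned projection (random stand-in here)
    # v:      (d,)   learned scoring vector (random stand-in here)
    # Score each step as v . tanh(W-projected state), softmax over time,
    # then return the weighted sum of hidden states as the context vector.
    scores = np.tanh(hidden @ W) @ v        # (T,) one score per time step
    scores = scores - scores.max()          # shift for numerical stability
    alpha = np.exp(scores)
    alpha = alpha / alpha.sum()             # (T,) attention weights, sum to 1
    context = alpha @ hidden                # (d,) weighted sum over time
    return context, alpha

# Toy usage: 6 time steps, 4-dimensional hidden states.
rng = np.random.default_rng(1)
h = rng.normal(size=(6, 4))
context, alpha = attend_and_pool(h, rng.normal(size=(4, 4)), rng.normal(size=(4,)))
```

The returned `context` vector, a weighted sum of all hidden states rather than only the last one, is what would then be fed to the classification layer.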