2023
DOI: 10.1109/tiv.2023.3236458

Vehicle Control on an Uninstrumented Surface With an Off-the-Shelf Smartwatch

Cited by 3 publications (2 citation statements)
References 40 publications
“…For training the datasets, we leveraged a set of recent deep neural networks, namely Conv1D [11,12], LSTM [13], and Transformer (i.e., self-attention-based classification model) [14]. Each of these models is adept at learning critical features from multivariate time-series data, significantly enhancing the accuracy of classifications and predictions [28][29][30]. The Conv1D and LSTM models are particularly proficient in processing sequence data, while the Transformer model, with its self-attention mechanism, has shown remarkable performance in various sequence understanding tasks.…”
Section: Network Architecture and Training Process (mentioning)
confidence: 99%
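The statement above names three model families for multivariate time-series classification without giving their exact configurations. Below is a minimal sketch of all three (a Conv1D stack, a two-layer LSTM, and a small self-attention classifier) on input of shape (T, D); the layer widths, kernel sizes, head counts, channel count, and class count are illustrative assumptions, not parameters reported by the cited or citing papers.

```python
# Minimal sketch of the three model families named above (Conv1D, LSTM,
# Transformer-style self-attention) for multivariate time-series classification.
# All shapes and hyperparameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

T, D, NUM_CLASSES = 100, 6, 4  # assumed sequence length, channels, classes

def conv1d_model():
    return models.Sequential([
        layers.Input(shape=(T, D)),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def lstm_model():
    return models.Sequential([
        layers.Input(shape=(T, D)),
        layers.LSTM(64, return_sequences=True),
        layers.LSTM(64),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def transformer_model(num_heads=4, key_dim=16):
    inputs = layers.Input(shape=(T, D))
    x = layers.Dense(64)(inputs)                      # project channels to model width
    attn = layers.MultiHeadAttention(num_heads=num_heads, key_dim=key_dim)(x, x)
    x = layers.LayerNormalization()(x + attn)         # residual connection + norm
    ff = layers.Dense(64, activation="relu")(x)
    x = layers.LayerNormalization()(x + ff)
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return models.Model(inputs, outputs)

for build in (conv1d_model, lstm_model, transformer_model):
    model = build()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
```

In each sketch the sequence axis is pooled before the softmax head, the usual pattern for whole-window classification of sensor sequences.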
“…The selection of the number of LSTM layers and their respective sequence lengths T (ranging from 35 to 100) was guided by cross-validation to strike a balance between performance and computational efficiency. We stacked the recurrent cells twice (i.e., we used a stacked two-layer LSTM) with the number of recurrent units set to T, mirroring the length of the input signal x ∈ ℝ^(T×D), following the approach adopted in previous studies [24,28]. The final recurrent hidden state then connects to a dense layer consisting of T units, functioning as a hidden layer within our network.…”
Section: Network Architecture and Training Process (mentioning)
confidence: 99%
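This statement pins down the recurrent part of the architecture fairly precisely: a stacked two-layer LSTM whose unit count equals the sequence length T, with the final hidden state feeding a dense hidden layer of T units. A minimal sketch under those constraints follows; the channel count D, the choice of T within the reported 35–100 range, the activations, and the output head are assumptions for illustration, not the citing paper's exact configuration.

```python
# Sketch of the stacked two-layer LSTM described above: both recurrent layers
# use T units (matching the input length T), and the final hidden state feeds
# a dense hidden layer of T units. D, T, and the output head are assumed.
import tensorflow as tf
from tensorflow.keras import layers, models

T = 50           # sequence length; the statement reports T in the range 35-100
D = 6            # number of input channels (assumed)
NUM_CLASSES = 4  # assumed number of output classes

model = models.Sequential([
    layers.Input(shape=(T, D)),              # x ∈ R^(T×D)
    layers.LSTM(T, return_sequences=True),   # first recurrent layer, T units
    layers.LSTM(T),                          # second layer; emits final hidden state
    layers.Dense(T, activation="relu"),      # dense hidden layer of T units
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Setting the recurrent width equal to T couples model capacity to the cross-validated window length, which is the trade-off between performance and computational cost the statement describes.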