2014
DOI: 10.1007/978-3-319-08979-9_37
Investigating Long Short-Term Memory Networks for Various Pattern Recognition Problems

Cited by 10 publications (6 citation statements)
References 17 publications
“…These results have outperformed the state-of-the-art both for the case of using just one training signature (1vs1) [13] and for the case of averaging the scores of the four one-to-one comparisons (4vs1) [22]. In addition, it is important to highlight the results obtained in this work compared to those obtained by Otte et al. in [10], where all experiments failed, obtaining a 23.75% EER in the best case. In that work, standard LSTM architectures seemed not to be appropriate for the task of signature verification.…”
Section: Discussion
confidence: 46%
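The 23.75% EER figure quoted above is the equal error rate, the operating point at which the false acceptance rate (impostor signatures accepted) equals the false rejection rate (genuine signatures rejected). A minimal sketch of computing EER from two score lists follows; the function name and the score distributions are illustrative assumptions, not data from the cited experiments:

```python
import numpy as np

def compute_eer(genuine, impostor):
    """Equal error rate: the threshold where FAR (fraction of impostor
    scores accepted) equals FRR (fraction of genuine scores rejected).
    Assumes higher scores mean 'more likely genuine'."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, best_eer = 1.0, 0.0
    for t in thresholds:
        far = np.mean(impostor >= t)   # impostors scoring at or above t
        frr = np.mean(genuine < t)     # genuines scoring below t
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer

# Hypothetical, well-separated score distributions for illustration only
rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.1, 1000)
impostor = rng.normal(0.4, 0.1, 1000)
print(f"EER ~ {compute_eer(genuine, impostor):.3f}")
```

A lower EER indicates better separation of genuine and impostor scores; 23.75% therefore corresponds to a verifier operating close to chance for this task.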
“…The LSTM RNNs proposed in that work seemed to authenticate genuine and impostor cases very well. However, as pointed out in [10], the method proposed in that work for training the LSTM RNNs is not feasible for real applications, for several reasons. First, the authors considered the same users for both development and evaluation of the system.…”
Section: Introduction
confidence: 99%
“…LSTMs have been shown to be successful in sequence prediction (Adi et al., 2016), sequence labeling (Sak and Beaufays, 2014), syntactic structure modeling (Linzen et al., 2016) and long-range semantic dependencies (He et al., 2017). LSTMs have an edge over CNNs and plain Recurrent Neural Networks (RNNs) in many ways (Otte et al., 2014). RNNs are effective when working with short-term dependencies, but fail to recognize context from chronologically widely spaced input events, i.e. long-term dependencies.…”
Section: Long Short-Term Memory
confidence: 99%
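The gating mechanism that gives LSTMs this edge over plain RNNs can be sketched in a single time step. This is a minimal NumPy illustration of the standard LSTM equations, not the architecture from any of the cited papers; the parameter names `W`, `U`, `b` and the helper `lstm_step` are assumptions for this sketch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b hold the stacked parameters for the
    input (i), forget (f), output (o) and candidate (g) gates."""
    z = W @ x + U @ h_prev + b           # pre-activations, shape (4 * hidden,)
    H = h_prev.shape[0]
    i = sigmoid(z[0:H])                  # input gate: how much new info enters
    f = sigmoid(z[H:2*H])                # forget gate: how much old state survives
    o = sigmoid(z[2*H:3*H])              # output gate: how much state is exposed
    g = np.tanh(z[3*H:4*H])              # candidate cell update
    c = f * c_prev + i * g               # additive cell path carries long-range info
    h = o * np.tanh(c)                   # hidden state passed to the next step
    return h, c

# Tiny usage example with random parameters (illustration only)
rng = np.random.default_rng(0)
D, H = 3, 4                              # input and hidden dimensions
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(5, D)):        # run over a short input sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (4,)
```

The additive update `c = f * c_prev + i * g` is what lets gradients flow across many time steps; a plain RNN, which overwrites its state multiplicatively at every step, loses exactly the long-term dependencies mentioned above.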
“…One of the first studies that analysed the potential of current deep learning approaches for on-line signature verification was [21]. In that work, Otte et al performed an exhaustive analysis of Long Short-Term Memory (LSTM) RNNs using a total of 20 users and 12 genuine signatures per user for training.…”
Section: On-line Signature Verification
confidence: 99%