2020
DOI: 10.1007/978-981-15-5093-5_11

Intent Classification in Question-Answering Using LSTM Architectures

Cited by 17 publications (10 citation statements)
References 7 publications
“…In this paper, we focus on various issues related to the results obtained from training W2V. In our experience, also in obtaining W2V for the Italian language [12] and in its usage [11], we found that some important choices have become so common that they are applied almost mechanically, without questioning their effectiveness. More specifically, what is the correct number of epochs needed before we can declare an embedding satisfactory?…”
Section: The Importance of Learning Time
Confidence: 99%
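
The epoch question raised in this excerpt lends itself to a quick experiment. The following is a minimal sketch, assuming gensim's Word2Vec API; the toy corpus is a hypothetical stand-in, not data from the cited work:

# Sketch: Word2Vec under different epoch budgets (gensim assumed;
# the toy corpus below is a hypothetical stand-in for real data).
from gensim.models import Word2Vec

sentences = [
    ["intent", "classification", "with", "lstm"],
    ["word", "embeddings", "for", "question", "answering"],
]

for epochs in (5, 20, 100):
    model = Word2Vec(
        sentences,
        vector_size=100,  # embedding dimension
        window=5,         # context window size
        min_count=1,      # keep every word in this toy corpus
        epochs=epochs,    # the training-time choice questioned above
    )
    # Inspect how a word's neighborhood shifts as training runs longer.
    print(epochs, model.wv.most_similar("lstm", topn=3))

Comparing the nearest-neighbor lists across epoch budgets gives a rough, qualitative sense of when the embedding stops changing.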
“…In Natural Language Processing (NLP) problems approached with neural networks, individual words, which typically belong to large vocabularies, must be transformed into compressed representations. Although the state of the art in NLP today is almost entirely based on Transformers [10,30,34], the difficulty of training such structures (both the computational cost and the need for huge datasets) often leads to a preference for different approaches [5,11,17,18,26] in which each word is coded individually.…”
Section: Introduction
Confidence: 99%
“…In recent years, search engines (Google) and chatbots (Siri, Alexa, and Cortana) have become better at returning the exact answer to our question. Question-answering systems have also seen significant architectural changes over the years, from basic Recurrent Neural Networks (RNNs) to transformers [8,12].…”
Section: Introduction
Confidence: 99%
“…Lastly, the output gate decides which information is relevant enough to be part of the output. The superior capabilities of LSTMs have been recognized and applied in several domains, such as machine translation [39,40,41], natural language processing [42,43,44], and time series analysis [45,46,47,48]. In this manuscript, the capabilities of both CNNs and LSTMs motivate our choice to employ both architectures for fall detection.…”
Section: Introduction
Confidence: 99%
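
The gating behavior described in this excerpt can be made concrete. Below is a minimal sketch of a single LSTM step in NumPy; the parameter names (W, U, b) and shapes are illustrative assumptions, not the notation of the cited paper:

# Sketch: one LSTM step, highlighting the output gate described above.
# Parameter names and shapes are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    # W, U, b hold parameters for the forget (f), input (i),
    # candidate (g), and output (o) transforms, keyed by those letters.
    f = sigmoid(W["f"] @ x + U["f"] @ h_prev + b["f"])  # forget gate
    i = sigmoid(W["i"] @ x + U["i"] @ h_prev + b["i"])  # input gate
    g = np.tanh(W["g"] @ x + U["g"] @ h_prev + b["g"])  # candidate values
    c = f * c_prev + i * g                              # updated cell state
    o = sigmoid(W["o"] @ x + U["o"] @ h_prev + b["o"])  # output gate
    h = o * np.tanh(c)                                  # gate selects what the cell exposes
    return h, c

# Tiny usage example with random parameters.
rng = np.random.default_rng(0)
n_in, n_hid = 4, 3
W = {k: rng.normal(size=(n_hid, n_in)) for k in "figo"}
U = {k: rng.normal(size=(n_hid, n_hid)) for k in "figo"}
b = {k: np.zeros(n_hid) for k in "figo"}
h, c = lstm_step(rng.normal(size=n_in), np.zeros(n_hid), np.zeros(n_hid), W, U, b)
print(h)  # hidden state after one step

The elementwise product of the output gate o with tanh(c) is exactly the "decide which information is relevant enough" step the excerpt describes.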