2019 IEEE Second International Conference on Artificial Intelligence and Knowledge Engineering (AIKE) 2019
DOI: 10.1109/aike.2019.00016
Aspect Detection using Word and Char Embeddings with (Bi) LSTM and CRF

Abstract: We propose a new, accurate aspect extraction method that makes use of both word- and character-based embeddings. We conducted experiments with various aspect extraction models using LSTM and BiLSTM, including a CRF enhancement, on five different pre-trained word embeddings extended with character embeddings. The results revealed not only that BiLSTM outperforms regular LSTM, but also that word embedding coverage in the train and test sets profoundly impacts aspect detection performance. Moreover, the additional CRF layer consi…
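In the setup the abstract describes, the (Bi)LSTM produces a per-token score for each aspect label, and the CRF layer on top picks a globally best label sequence rather than labeling each token independently. That decoding step is Viterbi search over emission and transition scores. A minimal pure-Python sketch, with illustrative tag names and scores that are not taken from the paper:

```python
def viterbi(emissions, transitions, tags):
    """Return the highest-scoring tag sequence.

    emissions:   list, one dict per token, mapping tag -> emission score
                 (in a BiLSTM-CRF these come from the BiLSTM outputs)
    transitions: dict mapping (prev_tag, tag) -> transition score
                 (learned by the CRF layer)
    """
    # best[t] = (best score of any path ending in tag t, that path)
    best = {t: (emissions[0][t], [t]) for t in tags}
    for em in emissions[1:]:
        new = {}
        for t in tags:
            # Pick the predecessor tag maximizing path score + transition.
            prev, (score, path) = max(
                ((p, best[p]) for p in tags),
                key=lambda kv: kv[1][0] + transitions[(kv[0], t)],
            )
            new[t] = (score + transitions[(prev, t)] + em[t], path + [t])
        best = new
    return max(best.values(), key=lambda sp: sp[0])[1]
```

With BIO-style tags, penalizing the impossible transition O→I steers decoding toward a consistent sequence even when the emission scores alone would prefer I:

```python
tags = ["O", "B", "I"]
transitions = {(p, t): 0.0 for p in tags for t in tags}
transitions[("O", "I")] = -10.0  # an I-tag cannot follow O
emissions = [{"O": 1.0, "B": 0.0, "I": 0.0},
             {"O": 0.0, "B": 0.5, "I": 1.0}]
viterbi(emissions, transitions, tags)  # -> ["O", "B"]
```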

Cited by 17 publications (13 citation statements)
References 24 publications
“…This layer classifies news items according to the input received from the previous layer. It consists of an LSTM built from gates, namely a forget gate, an input gate, and an output gate, which control how much of the input is written to the memory cell and how much of the previous state is erased [14]. The idea behind the Bi-LSTM layer is to split the units into two parts.…”
Section: Bi-LSTM Layer (unclassified)
“…The idea behind the Bi-LSTM layer is to split the units into two parts. The first, the forward states, processes information in the positive (forward) time direction, while the second, the backward states, processes information in the negative (backward) time direction [14]. In this study, the Bi-LSTM layer was configured with 64, 128, and 256 units.…”
Section: Bi-LSTM Layer (unclassified)
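The gate mechanics and the forward/backward split described in the excerpts above can be sketched in plain Python with a scalar LSTM cell. The weights and dimensions here are illustrative only (real layers use weight matrices over vectors, e.g. the 64/128/256-unit configurations the study mentions):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    # Scalar LSTM cell: one illustrative weight per gate input.
    f = sigmoid(w["f_x"] * x + w["f_h"] * h_prev)    # forget gate: how much old state to keep
    i = sigmoid(w["i_x"] * x + w["i_h"] * h_prev)    # input gate: how much new input to write
    o = sigmoid(w["o_x"] * x + w["o_h"] * h_prev)    # output gate: how much state to expose
    g = math.tanh(w["g_x"] * x + w["g_h"] * h_prev)  # candidate memory-cell value
    c = f * c_prev + i * g                           # updated memory cell
    h = o * math.tanh(c)                             # new hidden state
    return h, c

def lstm_run(xs, w):
    h, c, hs = 0.0, 0.0, []
    for x in xs:
        h, c = lstm_step(x, h, c, w)
        hs.append(h)
    return hs

def bilstm(xs, w_fwd, w_bwd):
    # Forward states read the sequence left-to-right; backward states
    # read it right-to-left; per-token outputs are paired (concatenated).
    fwd = lstm_run(xs, w_fwd)
    bwd = list(reversed(lstm_run(list(reversed(xs)), w_bwd)))
    return list(zip(fwd, bwd))
```

Because each token's output combines a forward state (left context) and a backward state (right context), the BiLSTM sees both sides of a token, which is why the cited works find it outperforms a unidirectional LSTM for tagging tasks.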
“…LSTMs are Recurrent Neural Networks (RNNs) (Medsker and Jain, 2001) with an internal memory that allows them to store information learned during training. LSTMs are frequently used in the reverse dictionary task (Sherstinsky, 2018) and in word and sentence embedding tasks in general (Augustyniak, Kajdanowicz and Kazienko, 2019; Liu et al., 2020), as they can learn long-term dependencies between the words in a sentence and thus compute a context representation vector for each word. BiLSTM, for its part, is a variant of the LSTM that allows a bidirectional representation of words (Augustyniak, Kajdanowicz and Kazienko, 2019).…”
Section: Advanced Model (mentioning)
confidence: 99%
“…The representation may be extended with additional ontologies (Bloehdorn and Hotho, 2004) or WordNets (Scott and Matwin, 1998; Piasecki et al., 2009; Misiaszek et al., 2014; Janz et al., 2017; Kocoń et al., 2019b) and used with SVM (Razavi et al., 2010) or logistic regression models (Waseem and Hovy, 2016; Sahlgren et al., 2018; Kocoń et al., 2018; Kocoń and Maziarz, 2021). Newer methods often use word embeddings (Wiegand et al., 2018; Łukasz Augustyniak et al., 2021), sometimes mixed with character embeddings (Augustyniak et al., 2019), together with deep neural networks, e.g. CNN (Zampieri et al., 2019a) or LSTM (Yenala et al., 2017).…”
Section: Related Work (mentioning)
confidence: 99%