Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018)
DOI: 10.18653/v1/d18-1375
A Genre-Aware Attention Model to Improve the Likability Prediction of Books

Abstract: Likability prediction of books has many uses. Readers, writers, as well as the publishing industry, can all benefit from automatic book likability prediction systems. In order to make reliable decisions, these systems need to assimilate information from different aspects of a book in a sensible way. We propose a novel multimodal neural architecture that incorporates genre supervision to assign weights to individual feature types. Our proposed method is capable of dynamically tailoring weights given to feature …
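As a rough illustration of the kind of architecture the abstract describes, the sketch below (PyTorch) attends over modality-specific feature vectors and reuses the attention weights for genre prediction. All names, dimensions, and the auxiliary genre head are assumptions for illustration, not the paper's exact configuration.

import torch
import torch.nn as nn

class GenreAwareAttention(nn.Module):
    # Sketch: attention over encoded feature types (e.g., lexical,
    # stylistic, visual), with the attention weights also driving a
    # genre classifier so that genre labels supervise the weighting.
    # Layer choices and dimensions are illustrative assumptions.
    def __init__(self, n_feature_types, feat_dim, n_genres):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)            # scalar score per feature type
        self.genre_head = nn.Linear(n_feature_types, n_genres)
        self.likability_head = nn.Linear(feat_dim, 1)

    def forward(self, feats):
        # feats: (batch, n_feature_types, feat_dim)
        scores = self.score(feats).squeeze(-1)         # (batch, n_feature_types)
        alpha = torch.softmax(scores, dim=-1)          # weights over feature types
        fused = (alpha.unsqueeze(-1) * feats).sum(1)   # weighted sum: (batch, feat_dim)
        genre_logits = self.genre_head(alpha)          # genre supervision on the weights
        likability = torch.sigmoid(self.likability_head(fused)).squeeze(-1)
        return likability, genre_logits, alpha

model = GenreAwareAttention(n_feature_types=4, feat_dim=64, n_genres=8)
likability, genre_logits, alpha = model(torch.randn(2, 4, 64))  # toy batch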

Cited by 21 publications (16 citation statements) | References 22 publications
“…The computation of such an embedding can be seen as a form of feature selection, and as such, it can be applied to any set of features sharing the same representation. This applies to cases where features come from different domains as in multimodal tasks [78] or from different levels of a neural architecture [38] or where they simply represent different aspects of the input document [136]. Similarly, attention can also be exploited as an auxiliary task during training so that specific features can be modeled via a multitask setting.…”
Section: Uses of Attention (citation type: mentioning)
Confidence: 99%
“…However, the input can also be other things, such as a juxtaposition of features or relevant aspects of the same textual element. For instance, Li et al. [56] and Zadeh et al. [78] considered the inputs composed of different sources, and in [136] and [139], the input represents different aspects of the same document. In that case, embeddings of the input can be collated together and fed into an attention model as multiple keys, as long as the embeddings share the same representation.…”
Section: Input Representation (citation type: mentioning)
Confidence: 99%
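The "multiple keys" point in the quote above can be made concrete with a short sketch: embeddings from different sources, sharing one dimensionality, are stacked and scored against a query. The shapes and the scaled dot-product scoring below are illustrative assumptions.

import torch
import torch.nn.functional as F

# Sketch: embeddings from different sources, collated as multiple keys.
batch, dim = 4, 64
text_emb = torch.randn(batch, dim)    # e.g., textual features
style_emb = torch.randn(batch, dim)   # e.g., stylistic features
visual_emb = torch.randn(batch, dim)  # e.g., visual features

keys = torch.stack([text_emb, style_emb, visual_emb], dim=1)  # (batch, 3, dim)
query = torch.randn(batch, dim)                               # task/context query

scores = torch.einsum("bd,bkd->bk", query, keys) / dim ** 0.5
alpha = F.softmax(scores, dim=-1)                   # one weight per source
context = torch.einsum("bk,bkd->bd", alpha, keys)   # fused representation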
“…It emphasizes the reasonable allocation of limited computing power when facing problems (Chaudhari et al., 2019). Due to the excellent effect, this mechanism has made breakthroughs in NLP (Maharjan et al., 2018), and computer vision (CV) (Li et al., 2019). At the same time, attention mechanism is also introduced in transportation research.…”
Section: Related Work (citation type: mentioning)
Confidence: 99%
“…Then, attention scores were computed for these encodings. More recently, authors of [41] place an attention layer on top of several modality-specific feature encoding layers to model the importance of different modalities in book genre prediction. There are many other works [20,35,39,40] that leverage this technique, i.e.…”
Section: Related Work (citation type: mentioning)
Confidence: 99%
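The pattern described in the last quote (an attention layer on top of several modality-specific encoders) could be set up as in the sketch below, where each encoder projects a differently sized input into a shared space before attention is applied. Input sizes and encoder choices are assumptions.

import torch
import torch.nn as nn

# Sketch: modality-specific encoders projecting differently sized
# inputs into one shared space; input sizes are assumptions.
enc_text = nn.Linear(300, 64)   # e.g., averaged word embeddings
enc_style = nn.Linear(50, 64)   # e.g., stylometric features
enc_read = nn.Linear(10, 64)    # e.g., readability scores

x_text, x_style, x_read = torch.randn(8, 300), torch.randn(8, 50), torch.randn(8, 10)
feats = torch.stack([enc_text(x_text), enc_style(x_style), enc_read(x_read)], dim=1)
# feats: (batch, 3, 64), ready for an attention layer over modalities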