Proceedings of the 26th International Conference on World Wide Web 2017
DOI: 10.1145/3038912.3052599
User Personalized Satisfaction Prediction via Multiple Instance Deep Learning

Abstract: Community-based question answering (CQA) services have arisen as a popular knowledge-sharing pattern for netizens. Through abundant interactions among users, individuals are able to obtain satisfactory information. However, users often cannot obtain answers within minutes; they have to check the progress over time until satisfying answers are submitted. We address this problem as a user personalized satisfaction prediction task. Existing methods usually rely on manual feature selection. It is …


Cited by 17 publications (9 citation statements) · References 27 publications
“…Chen et al. [45] proposed an approach to predict users' personalized satisfaction using a multiple instance deep learning framework. The authors presented a novel model, the Multiple Instance Deep Learning (MIDL) framework, to predict personalized user satisfaction.…”
Section: Ranking Using DL
confidence: 99%
“…There are a number of ways to evaluate the performance of DL methods. Recall, precision, and the F1-score are noteworthy performance measures for prediction, ranking, and classification-based tasks [38], [45]. The F1-score balances recall and precision as:…”
Section: Performance Evaluation
confidence: 99%
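As a quick illustration (not drawn from the cited papers), the F1-score mentioned above is the harmonic mean of precision and recall, which can be computed directly from true-positive, false-positive, and false-negative counts:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Compute the F1-score from raw confusion-matrix counts.

    precision = tp / (tp + fp)   -- fraction of predicted positives that are correct
    recall    = tp / (tp + fn)   -- fraction of actual positives that are found
    F1 is their harmonic mean, so it is high only when both are high.
    """
    if tp == 0:
        return 0.0  # no true positives: both precision and recall are zero
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


# Example: 8 true positives, 2 false positives, 2 false negatives
# gives precision = recall = 0.8, hence F1 = 0.8.
print(f1_score(8, 2, 2))
```

Because the harmonic mean penalizes imbalance, a model with precision 1.0 but recall 0.1 scores far lower on F1 than one with both at 0.5, which is why F1 is preferred over plain accuracy for ranking and classification tasks with skewed classes.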
“…An illustration is provided in Figure 2. We note that: (a) LSTM has been widely studied and has demonstrated superb performance in dealing with long sequences [20]; and (b) alternative RNN variants can also be adopted here, such as GRU [9,16] and bidirectional RNNs [7,25]. However, these are not the core parts of the current work.…”
Section: DeepTSCI Framework
confidence: 99%
“…Large and complicated networks have been successful in many natural language processing tasks (Zhu et al., 2017; Chen et al., 2017e; Pan et al., 2017a). Recently, Bowman et al. (2015) released the Stanford Natural Language Inference (SNLI) dataset, a high-quality and large-scale benchmark that has inspired many significant works (Bowman et al., 2016; Mou et al., 2016; Vendrov et al., 2016; Conneau et al., 2017; Gong et al., 2018; McCann et al., 2017; Chen et al., 2017b; Choi et al., 2017; Tay et al., 2017).…”
Section: Natural Language Inference
confidence: 99%