Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d18-1452
Joint Multitask Learning for Community Question Answering Using Task-Specific Embeddings

Abstract: We address jointly two important tasks for Question Answering in community forums: given a new question, (i) find related existing questions, and (ii) find relevant answers to this new question. We further use an auxiliary task to complement the previous two, i.e., (iii) find good answers with respect to the thread question in a question-comment thread. We use deep neural networks (DNNs) to learn meaningful task-specific embeddings, which we then incorporate into a conditional random field (CRF) model for the …
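The joint setup described in the abstract — shared representations specialized per task — can be sketched as a shared embedding table with task-specific projection heads for the three tasks. This is a minimal illustrative sketch only: all names, dimensions, and the cosine-similarity scorer are assumptions for exposition, not the paper's actual DNN+CRF architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions -- not taken from the paper.
VOCAB, EMB, HIDDEN = 100, 16, 8
TASKS = ["question_relatedness", "answer_relevance", "answer_goodness"]

# Shared word embeddings, plus one task-specific projection per task.
shared_emb = rng.normal(scale=0.1, size=(VOCAB, EMB))
task_heads = {t: rng.normal(scale=0.1, size=(EMB, HIDDEN)) for t in TASKS}

def encode(token_ids, task):
    """Average the shared embeddings, then apply the task-specific head."""
    pooled = shared_emb[token_ids].mean(axis=0)   # (EMB,)
    return np.tanh(pooled @ task_heads[task])     # (HIDDEN,)

def score_pair(a_ids, b_ids, task):
    """Cosine similarity between two task-specific embeddings."""
    u, v = encode(a_ids, task), encode(b_ids, task)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

# Toy token-id sequences for a new question and an existing one.
q_new, q_old = [1, 5, 7], [1, 5, 9]
s = score_pair(q_new, q_old, "question_relatedness")
```

In the paper itself, such task-specific embeddings would feed a CRF that jointly models the three decisions; here each task simply gets its own projection of the shared representation.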

Cited by 15 publications (11 citation statements) | References 35 publications
“…[86] jointly trains a monolingual formality transfer model and a formality sensitive machine translation model between English and French. For community question answering, [52] builds an MTL model that extracts existing questions related to the current one and looks for question-comment threads that could answer the question at the same time. To analyze the argumentative structure of scientific publications, [57] optimizes argumentative component identification, discourse role classification, citation context identification, subjective aspect classification, and summary relevance classification together with a dynamic weighting mechanism.…”
Section: Joint MTL
confidence: 99%
“…Inspired by the success of multi-task learning in other NLP tasks, several attempts have been made to solve answer selection jointly with other tasks. Moschitti, Bonadiman, and Uva (2017) and Joty, Màrquez, and Nakov (2018) enhance answer selection in CQA via multitask learning with the auxiliary tasks of question-question relatedness and question-comment relatedness. Yang et al. (2019) leverage question categorization to enhance question representation learning for CQA.…”
Section: Related Work
confidence: 99%
“…Answer and StackExchange. Many studies have addressed different tasks in CQA, such as answer selection, question-question relatedness, and comment classification (Moschitti, Bonadiman, and Uva 2017; Joty, Màrquez, and Nakov 2018; Nakov et al. 2017). However, due to the length and redundancy of answers in the CQA scenario, several challenges need to be tackled in real-world applications.…”
Section: Introduction
confidence: 99%
“…Reading Comprehension (RC), which aims to answer questions by comprehending the contexts of given passages, is a frontier topic in natural language processing research. Recently, many RC models (Lin et al., 2018; Joty et al., 2018; Weber et al., 2019) have been proposed and have achieved considerable success. According to their answer prediction methods, RC models can be roughly divided into two major categories: extractive and generative.…”
Section: Introduction
confidence: 99%