2016 IEEE 16th International Conference on Data Mining (ICDM)
DOI: 10.1109/icdm.2016.0060
Modeling Ambiguity, Subjectivity, and Diverging Viewpoints in Opinion Question Answering Systems

Abstract: Product review websites provide an incredible lens into the wide variety of opinions and experiences of different people, and play a critical role in helping users discover products that match their personal needs and preferences. To help address questions that can't easily be answered by reading others' reviews, some review websites also allow users to pose questions to the community via a question-answering (QA) system. As one would expect, just as opinions diverge among different reviewers, answers…

Cited by 75 publications (62 citation statements)
References 16 publications (39 reference statements)
“…The closely-following second and third most common reasons are answer granularity (GRN) and synonyms (SYN) which account for 72.9% and 68.3% of VQAs across both datasets (Figure 3c; 2 person threshold). These findings highlight that most answer differences can be resolved by disambiguating visual questions or resolving synonyms and differing granularity [16,27,43].…”
Section: (Un)common Reasons for Answer Differences
confidence: 99%
“…We offer our work as a valuable foundation for improving VQA services, by empowering system designers and users to know how to prevent, interpret, or resolve answer differences. Specifically, a solution that anticipates why a visual question will lead to different answers (summarized in Figure 1) could (1) help users identify how to modify their visual question in order to arrive at a single, unambiguous answer; e.g., retake an image when it is low quality or does not show the answer versus modify the question when it is ambiguous or invalid; (2) increase users' awareness for what reasons, if any, trigger answer differences when they are given a single answer; or (3) reveal how to automatically aggregate different answers [2,19,24,26,43] when multiple answers are collected.…”
Section: Introduction
confidence: 99%
“…All datasets are in the form of (input, response) pairs. For UBUNTU 8 , SEMEVAL15 9 , and AMAZONQA 10 we use standard data splits into training, dev, and test portions following the original work (Lowe et al, 2017; Nakov et al, 2015; Wan and McAuley, 2016). For the OpenSubtitles dataset (OPENSUB) (Lison and Tiedemann, 2016), we rely on the data splits introduced by Henderson et al (2019).…”
Section: Methods
confidence: 99%
“…The statistics are shown in Table 1. The original question-answer pairs are from a public data collection crawled by Wan and McAuley [34]. We also utilize the product ID in the QA dataset to align with the reviews in Amazon review dataset [14] so that the corresponding reviews of each product can be obtained.…”
Section: Datasets and Evaluation Metrics
confidence: 99%