Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2016)
DOI: 10.18653/v1/n16-1017
Conversational Flow in Oxford-style Debates

Abstract: Public debates are a common platform for presenting and juxtaposing diverging views on important issues. In this work we propose a methodology for tracking how ideas flow between participants throughout a debate. We use this approach in a case study of Oxford-style debates, a competitive format where the winner is determined by audience votes, and show how the outcome of a debate depends on aspects of conversational flow. In particular, we find that winners tend to make better use of a debate's interactive component […]

Cited by 53 publications (79 citation statements)
References 20 publications
“…Att means the model has the attention mechanism from Section 3.2.1; Reg means the model uses the optimization objective from Equation 9 (all other models use the optimization objective from Equation 4); Drpt means the model uses dropout (a popular regularization technique for neural networks; Srivastava et al., 2014) with a rate of 0.5. We compare our results against the best models from Zhang et al. (2016). Each model uses a Logistic Regression (LR) classifier and is distinguished by the features it uses.…”
Section: Results (citation type: mentioning; confidence: 99%)
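The dropout mentioned in this statement (rate 0.5) refers to standard inverted dropout (Srivastava et al., 2014). Below is a minimal NumPy sketch of that regularizer, not code from the cited paper; the function name, array shapes, and values are illustrative assumptions.

import numpy as np

def dropout(activations, rate=0.5, training=True, rng=None):
    # Inverted dropout (Srivastava et al., 2014): during training, zero each
    # unit with probability `rate` and rescale the survivors by 1/(1 - rate)
    # so the expected activation is unchanged; at test time, pass through.
    if not training or rate == 0.0:
        return activations
    rng = rng if rng is not None else np.random.default_rng()
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

# Illustrative use: a batch of 4 hidden vectors of size 8 from some encoder.
h = np.random.default_rng(0).standard_normal((4, 8))
h_train = dropout(h, rate=0.5)                  # roughly half the units zeroed
h_test = dropout(h, rate=0.5, training=False)   # identity at evaluation time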
“…The model uses an attention mechanism that creates a weighted sum over all hidden states. In order to achieve state-of-the-art results on a corpus of debate transcripts (Zhang et al., 2016), we regularize the RNN model by propagating errors based on implicit audience reaction. Our results show that this regularization technique is critical for obtaining a state-of-the-art result.…”
Section: Results (citation type: mentioning; confidence: 99%)
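The attention mechanism described here computes a weighted sum over all of the RNN's hidden states. The sketch below assumes the simplest variant, scoring each time step against a single learned query vector; the function names and dimensions are illustrative and not taken from the cited work.

import numpy as np

def softmax(scores):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(scores - scores.max())
    return e / e.sum()

def attention_pool(hidden_states, query):
    # hidden_states: (T, d) RNN outputs, one vector per time step.
    # query: (d,) learned vector used to score each time step.
    scores = hidden_states @ query        # (T,) relevance score per step
    weights = softmax(scores)             # attention weights, sum to 1
    context = weights @ hidden_states     # (d,) weighted sum of the states
    return context, weights

# Illustrative use with placeholder values: 6 time steps, hidden size 16.
rng = np.random.default_rng(0)
H = rng.standard_normal((6, 16))
q = rng.standard_normal(16)
context, alphas = attention_pool(H, q)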