Proceedings of the ACM Web Conference 2022
DOI: 10.1145/3485447.3512144
GraphNLI: A Graph-based Natural Language Inference Model for Polarity Prediction in Online Debates

Cited by 11 publications (13 citation statements)
References 28 publications
“…where GraphEncoder_t(·) indicates that one of its layers has been modified by t. To be more specific, a pretext token t_{⟨k⟩,l} will modify the l-th layer of the graph encoder into t_{⟨k⟩,l} ⊙ H_l with an element-wise multiplication, where we multiply the pretext token t_{⟨k⟩,l} with each row of H_l element-wise. Subsequently, when l < L, the next layer will be generated as…”
Section: Multi-task Pre-training
confidence: 99%
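The modulation described in the quote, multiplying a per-layer pretext token element-wise with every row of the layer's node-embedding matrix H_l, can be sketched with NumPy broadcasting. This is an illustrative reconstruction, not the citing paper's implementation; the shapes and the function name are assumptions.

```python
import numpy as np

def modulated_layer(H_l: np.ndarray, t_kl: np.ndarray) -> np.ndarray:
    """Apply a pretext token t_{<k>,l} to layer l of a graph encoder.

    H_l  : (n, d) matrix of node embeddings at layer l.
    t_kl : (d,) pretext-token vector for task k at layer l.

    Broadcasting multiplies t_kl element-wise with each row of H_l,
    i.e. t_{<k>,l} (.) H_l in the quoted notation.
    """
    return H_l * t_kl  # (n, d) * (d,) -> (n, d)

# Toy check: with H_l all ones, every row of the output equals the token.
H = np.ones((3, 4))
t = np.array([1.0, 0.5, 0.0, 2.0])
out = modulated_layer(H, t)
```

The modified layer then feeds the next encoder layer whenever l < L, exactly as in an unmodified forward pass.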
“…The Web has evolved into a universal data repository, linking an expansive array of entities to create vast and intricate graphs. Mining such widespread graph data has fueled a myriad of Web applications, ranging from Web mining [1,54] and social network analysis [63,65] to content recommendation [35,66]. Contemporary techniques for graph analysis predominantly rely on graph representation learning, particularly graph neural networks (GNNs) [12,13,23,43,60].…”
Section: Introduction
confidence: 99%
“…Thus, we define the refutation reward r_refutation to reward the actions that increase the refutation of ĉ and penalize actions that decrease the refutation of ĉ. Following similar disbelief and polarity classification research works [2,41], we build the refutation classifier f_refutation using BERT [20], which measures whether the text expresses refutation. However, distinct from Jiang et al. [41], who only use the response text for classification, we use both the tweet and generated response as input.…”
Section: 14
confidence: 99%
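The reward described above, encouraging actions that raise a classifier's refutation score and penalising those that lower it, can be sketched as a signed score difference. This is a minimal illustration under assumed names; the citing paper's exact reward shaping and the BERT classifier f_refutation are not reproduced here.

```python
def refutation_reward(score_before: float, score_after: float) -> float:
    """Signed reward for a generation action.

    `score_before`/`score_after` stand in for the refutation
    classifier's probability that the (tweet, response) pair expresses
    refutation, before and after the action. A positive return value
    rewards increased refutation; a negative one penalises a decrease.
    """
    return score_after - score_before

# An action that raises the refutation score earns a positive reward.
gain = refutation_reward(0.2, 0.7)
# An action that lowers it is penalised.
loss = refutation_reward(0.7, 0.2)
```

In the quoted setup, the scores would come from a BERT-based classifier fed both the original tweet and the generated response, rather than the response alone.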
“…To cope with the limitations in terms of the data that an instance has available, we take a conversational approach. We capture the conversational context of each post using GraphNLI (Agarwal et al. 2022), a state-of-the-art graph learning-based framework. This is in stark contrast to previous works (Bin Zia et al. 2022; Kurita, Belova, and Anastasopoulos 2019; Risch and Krestel 2020) that look at each post in isolation.…”
Section: Introduction
confidence: 99%
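The "conversational context" contrasted with per-post classification can be pictured as walking up the reply tree from a post and collecting its ancestors. The sketch below is a deliberately simplified stand-in: GraphNLI itself uses graph walks over the discussion tree plus a sentence encoder, none of which is reproduced here, and the `parent` map and hop limit are assumptions.

```python
def conversational_context(parent: dict, post: str, max_hops: int = 3) -> list:
    """Collect a post and up to `max_hops` of its reply-tree ancestors.

    `parent` maps each reply to the post it responds to. A context-aware
    model scores `post` against this surrounding chain instead of
    classifying the post text in isolation.
    """
    chain = [post]
    while post in parent and len(chain) <= max_hops:
        post = parent[post]
        chain.append(post)
    return chain

# Toy reply tree: "a" <- "b" <- "c" (c replies to b, b replies to a).
parent = {"c": "b", "b": "a"}
ctx = conversational_context(parent, "c")
```

Classifying "c" together with ["b", "a"] is the conversational setting; the cited prior works would see only "c".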