2021
DOI: 10.1162/tacl_a_00357
Modeling Content and Context with Deep Relational Learning

Abstract: Building models for realistic natural language tasks requires dealing with long texts and accounting for complicated structural dependencies. Neural-symbolic representations have emerged as a way to combine the reasoning capabilities of symbolic methods with the expressiveness of neural networks. However, most of the existing frameworks for combining neural and symbolic representations have been designed for classic relational learning tasks that work over a universe of symbolic entities and relations. In thi…

Cited by 19 publications (18 citation statements)
References 35 publications
“…Note that this information is only used for pre-training. Other works using social information to analyze political bias (Li and Goldwasser, 2019; Nguyen et al., 2020; Pacheco and Goldwasser, 2021) augment the text with social information; however, since this information can be difficult to obtain in real time, we decided to investigate whether it can be used as a distant-supervision source for pre-training a language model. These pre-training tasks are then used for training a Multi-head Attention Network (MAN), which creates a bias-aware representation of the text.…”
Section: Adapted From NYTimes (Left)
Mentioning, confidence: 99%
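
The passage above describes using noisy social signals as distant-supervision labels for pre-training a bias-aware text encoder. The following is a minimal PyTorch sketch of that idea under stated assumptions: the encoder, pooling, label scheme, and all names are illustrative, and it is not the cited MAN architecture.

# Hypothetical sketch: social signals (e.g., which outlet or community shared an
# article) provide noisy bias labels used to pre-train a text encoder.
import torch
import torch.nn as nn

class BiasAwarePretrainer(nn.Module):
    def __init__(self, vocab_size=30000, dim=256, heads=8, num_labels=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        # Multi-head self-attention encoder standing in for the bias-aware encoder
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Distant-supervision head: predict the noisy, source-derived bias label
        self.bias_head = nn.Linear(dim, num_labels)

    def forward(self, token_ids):
        hidden = self.encoder(self.embed(token_ids))   # (batch, seq, dim)
        pooled = hidden.mean(dim=1)                    # simple mean pooling
        return self.bias_head(pooled)                  # logits over bias labels

# One pre-training step on a toy batch with noisy labels from social metadata
model = BiasAwarePretrainer()
token_ids = torch.randint(0, 30000, (4, 64))
noisy_labels = torch.tensor([0, 2, 1, 0])
loss = nn.CrossEntropyLoss()(model(token_ids), noisy_labels)
loss.backward()
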
“…The pseudo-code for the structured learning procedure is shown in Algorithm 1. We implemented our models using DRaiL (Pacheco and Goldwasser, 2020), a declarative deep structured prediction framework built on PyTorch, and extended it to support our randomized inference procedures.…”
Section: Learning
Mentioning, confidence: 99%
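
Algorithm 1 itself is not reproduced in the snippet above, so the following is only a generic sketch of one deep structured learning step, assuming the usual recipe of loss-augmented MAP inference followed by a structured hinge update; the names and the greedy stand-in for inference are hypothetical and do not reflect DRaiL's API or the paper's randomized inference procedures.

import torch

def structured_hinge_step(potentials, gold, infer, optimizer, margin=1.0):
    # potentials: {grounded rule: neural score (requires grad)}
    # gold / infer(...): {grounded rule: 0 or 1} indicator assignments
    predicted = infer(potentials, gold, margin)                   # best competing structure
    gold_score = sum(p * gold[r] for r, p in potentials.items())
    pred_score = sum(p * predicted[r] for r, p in potentials.items())
    loss = torch.clamp(margin + pred_score - gold_score, min=0.0)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)

# Toy usage: a feed-forward scorer over rule features, greedy stand-in for inference
scorer = torch.nn.Linear(4, 1)
optimizer = torch.optim.Adam(scorer.parameters(), lr=1e-3)
features = {"r1": torch.randn(4), "r2": torch.randn(4)}
gold = {"r1": 1, "r2": 0}
potentials = {r: scorer(f).squeeze() for r, f in features.items()}
greedy = lambda pots, y, m: {r: int((s + m * (1 - y[r])).item() > 0) for r, s in pots.items()}
structured_hinge_step(potentials, gold, greedy, optimizer)
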
“…In addition to modeling the narrative structure in the embedding space, we add a symbolic inference procedure to capture structural dependencies in the output space for the StoryCommonsense task. To model these dependencies, we use DRaiL (Pacheco and Goldwasser, 2021), a neural-symbolic framework that allows us to define probabilistic logical rules on top of neural network potentials.…”
Section: Symbolic Inference
Mentioning, confidence: 99%
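
As an illustration of "probabilistic logical rules on top of neural network potentials", here is a small sketch assuming a StoryCommonsense-style rule Sentence(s) ∧ Character(c) ⇒ HasEmotion(s, c, e): each grounding of the rule is scored by a neural potential, and inference picks a label per grounding. The predicates, embeddings, and scorer are assumptions; this is not DRaiL's declarative syntax.

import itertools
import torch
import torch.nn as nn

EMOTIONS = ["joy", "fear", "anger"]   # hypothetical label set for the rule head

class RulePotential(nn.Module):
    # Feed-forward potential mapping a (sentence, character) pair to scores per emotion
    def __init__(self, dim=64):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, len(EMOTIONS)))

    def forward(self, sent_emb, char_emb):
        return self.ff(torch.cat([sent_emb, char_emb], dim=-1))

# Ground the rule over all (sentence, character) pairs and score each grounding
potential = RulePotential()
sent_embs = {f"s{i}": torch.randn(64) for i in range(2)}
char_embs = {f"c{j}": torch.randn(64) for j in range(2)}
groundings = {(s, c): potential(sent_embs[s], char_embs[c])
              for s, c in itertools.product(sent_embs, char_embs)}
# With independent groundings, MAP inference reduces to a per-grounding argmax
prediction = {g: EMOTIONS[scores.argmax().item()] for g, scores in groundings.items()}
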
“…This way, all neural parameters are updated to optimize the global objective. Additional details can be found in Pacheco and Goldwasser (2021). To score weighted rules, we used feed-forward networks over the node embeddings obtained by the objectives outlined in Sec.…”
Section: Symbolic Inference
Mentioning, confidence: 99%
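
To illustrate how one backward pass through a global objective can update all neural parameters at once, here is a hedged sketch in which two hypothetical rule scorers (feed-forward networks over shared node embeddings) are trained jointly; the heads, label sets, and losses are assumptions, not the cited system's actual components.

import torch
import torch.nn as nn
import torch.nn.functional as F

# A shared encoder produces the node embeddings consumed by every rule scorer
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
emotion_head = nn.Linear(2 * 64, 3)      # feed-forward scorer for a hypothetical emotion rule
motivation_head = nn.Linear(2 * 64, 2)   # feed-forward scorer for a hypothetical motivation rule
params = list(encoder.parameters()) + list(emotion_head.parameters()) + list(motivation_head.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

# Node embeddings for one (sentence, character) pair of the inferred structure
sent, char = encoder(torch.randn(32)), encoder(torch.randn(32))
pair = torch.cat([sent, char])

# Global objective: sum of per-rule losses; one backward pass updates both rule
# heads and the shared encoder, so all neural parameters move together
loss = (F.cross_entropy(emotion_head(pair).unsqueeze(0), torch.tensor([1]))
        + F.cross_entropy(motivation_head(pair).unsqueeze(0), torch.tensor([0])))
optimizer.zero_grad()
loss.backward()
optimizer.step()
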