2022
DOI: 10.1162/tacl_a_00463

Towards General Natural Language Understanding with Probabilistic Worldbuilding

Abstract: We introduce the Probabilistic Worldbuilding Model (PWM), a new fully symbolic Bayesian model of semantic parsing and reasoning, as a first step in a research program toward more domain- and task-general NLU and AI. Humans create internal mental models of their observations that greatly aid in their ability to understand and reason about a large variety of problems. In PWM, the meanings of sentences, acquired facts about the world, and intermediate steps in reasoning are all expressed in a human-readable forma…
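As an illustration of the shared formal language the abstract describes, the following minimal sketch (hypothetical code, not the PWM implementation; all names are invented for illustration) represents sentence meanings, acquired facts, and a deduction step as the same kind of human-readable logical forms:

```python
# Minimal illustration (hypothetical, not the PWM implementation) of one
# human-readable formal language shared by sentence meanings, acquired
# facts, and intermediate reasoning steps.
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    predicate: str
    argument: str
    def __str__(self):
        return f"{self.predicate}({self.argument})"

@dataclass(frozen=True)
class Implication:
    """Universally quantified rule: forall x. antecedent(x) -> consequent(x)."""
    antecedent: str
    consequent: str
    def __str__(self):
        return f"forall x. {self.antecedent}(x) -> {self.consequent}(x)"

# Facts acquired from reading, stored as logical forms rather than raw text.
theory = {Atom("cat", "fluffy"), Implication("cat", "mammal")}

def deduce(theory):
    """One round of universal instantiation plus modus ponens; every
    derived fact is itself a human-readable logical form."""
    derived = set()
    for rule in (f for f in theory if isinstance(f, Implication)):
        for atom in (f for f in theory if isinstance(f, Atom)):
            if atom.predicate == rule.antecedent:
                derived.add(Atom(rule.consequent, atom.argument))
    return derived

for fact in deduce(theory):
    print(fact)  # -> mammal(fluffy)
```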

Cited by 4 publications (4 citation statements) · References 39 publications
“…One other aspect of LLMs that may limit them when applied to reasoning is that they produce natural language text, which may be ambiguous and diverse. These considerations motivate us to study MSPs as structured representations of meaning, which can in turn be used to generate natural language (Saparov and Mitchell, 2022).…”
Section: Related Work
confidence: 99%
“…There are two broad research approaches for reasoning in NLP: (1) reasoning over a formal symbolic language, possibly with neuro-symbolic methods and/or semantic parsing (Saparov & Mitchell, 2022; Kapanipathi et al., 2021; Dong et al., 2019; Rocktäschel & Riedel, 2017), or (2) reasoning directly over natural language (Chen et al., 2021; Bostrom et al., 2022; Welleck et al., 2021; Bhagavatula et al., 2020). While PRONTOQA is generated from symbolic ontologies, the examples themselves are in natural language, and so it provides value to both research approaches.…”
Section: Related Work
confidence: 99%
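To make the PRONTOQA point concrete, here is a hedged sketch (not the actual PRONTOQA generator; the ontology, rendering templates, and function names are invented for illustration) of producing a natural-language deduction example from a symbolic ontology, so that both symbolic and text-based reasoners can consume it:

```python
# Illustrative sketch of generating a natural-language deduction example
# from a symbolic ontology (hypothetical; not the actual PRONTOQA generator).
ontology = {"cat": "mammal", "mammal": "vertebrate", "vertebrate": "animal"}

def generate_example(start: str, instance: str):
    # Symbolic side: a chain of subtype axioms plus one membership fact.
    # Natural-language side: the same content rendered as English sentences.
    context = [f"Every {sub} is a {sup}." for sub, sup in ontology.items()]
    context.append(f"{instance.capitalize()} is a {start}.")
    # Gold chain of reasoning, one modus-ponens step per ontology edge.
    proof, current = [], start
    while current in ontology:
        current = ontology[current]
        proof.append(f"{instance.capitalize()} is a {current}.")
    article = "an" if current[0] in "aeiou" else "a"
    question = f"Is {instance} {article} {current}?"
    return context, question, proof

context, question, proof = generate_example("cat", "fluffy")
print(" ".join(context))      # Every cat is a mammal. ... Fluffy is a cat.
print(question)               # Is fluffy an animal?
print(" -> ".join(proof))     # Fluffy is a mammal. -> ... -> Fluffy is an animal.
```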
“…It suggests that people solve situated problems by elaborating a mental picture of the situation, including elements that may be peripheral to a specific answer, rather than by constructing a deductive proof from a few key facts to an answer (Byrne, 1991). Recently, Saparov and Mitchell (2022) tried creating an internal "mental model" using a set of axioms that deductively explain the observations. Other studies in AI have attempted to identify such elements by studying what questions people naturally ask when reading text (Ko et al., 2020) or viewing images (Mostafazadeh et al., 2016).…”
Section: Related Work
confidence: 99%
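The following toy sketch (an illustrative assumption, not the cited system's method) shows one reading of "axioms that deductively explain the observations": enumerate small sets of candidate facts and keep one whose deductive closure contains the observation:

```python
# Toy abduction sketch (illustrative only): find a small axiom set whose
# deductive consequences include an observation, i.e., that "explains" it.
from itertools import combinations

# Candidate axioms: ground facts plus Horn rules (body -> head).
facts = {"dog(fido)", "cat(tom)"}
rules = [({"dog(fido)"}, "barks(fido)"),
         ({"cat(tom)"}, "meows(tom)")]

def entails(axioms, goal):
    """Forward chaining to a fixed point over the chosen axioms."""
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= known and head not in known:
                known.add(head)
                changed = True
    return goal in known

def explain(observation):
    """Smallest subset of candidate facts that entails the observation:
    a crude 'mental model' of why the observation holds."""
    for size in range(1, len(facts) + 1):
        for subset in combinations(sorted(facts), size):
            if entails(set(subset), observation):
                return set(subset)
    return None

print(explain("barks(fido)"))  # {'dog(fido)'}
```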