Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.463
Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data

Abstract: The success of the large neural language models on many NLP tasks is exciting. However, we find that these successes sometimes lead to hype in which these models are being described as "understanding" language or capturing "meaning". In this position paper, we argue that a system trained only on form has a priori no way to learn meaning. In keeping with the ACL 2020 theme of "Taking Stock of Where We've Been and Where We're Going", we argue that a clear understanding of the distinction between form and meaning…

Cited by 571 publications (480 citation statements). References 44 publications.
“…Interestingly, GPT-3 also performs quite well on semantic and discourse reasoning tasks, such as resolving pronouns or predicting final words of sentences. In light of the impressive achievements of GPT-3, an interesting discussion has been unfolding regarding the ability of such language models to present a “general AI,” and whether the often attested lack of meaning or understanding of such models that mainly operate at the level of forms (Bender and Koller, 2020) is even relevant. Indeed, the fascinating performance in many tasks demonstrates that many patterns underlying semantically coherent language use can be extracted and synthesized when scaling up the models and the data.…”
Section: Computational Models of Human-Agent Communication
confidence: 99%
“…Of course, there is much to dislike about BERT and its ilk, primarily the fact that it lacks not only any kind of communicative goals, but any links to real-world meanings at all (Bender & Koller, 2020): Words are represented as vectors that capture their distributional similarity to other words (a kind of souped-up Latent Semantic Analysis), albeit in a context-dependent fashion (e.g. “table” would have different vectors in the input strings “He sat at the table” and “See Table 1 for details”).…”
Section: Can You Tell Me How To Get… How To Get To Abstract Representations
confidence: 99%
“…Moreover, Ettinger (2020) found that the popular BERT model (Devlin et al, 2019) completely failed to acquire a general understanding of negation. Relatedly, Bender and Koller (2020) contend that meaning cannot be learned from form alone, and argue for approaches that ground language (communication) in the real world. We believe formal meaning representations therefore have an important role to play in future semantic applications, as semantic parsers produce an explicit model of a real-world interpretation.…”
Section: Introduction
confidence: 99%