2022
DOI: 10.1007/s11229-022-03931-4
Understanding models understanding language

Abstract: Landgrebe and Smith (Synthese 198(March):2061–2081, 2021) present an unflattering diagnosis of recent advances in what they call language-centric artificial intelligence—perhaps more widely known as natural language processing: The models that are currently employed do not have sufficient expressivity, will not generalize, and are fundamentally unable to induce linguistic semantics, they say. The diagnosis is mainly derived from an analysis of the widely used Transformer architecture. Here I address a number o…

Cited by 6 publications (2 citation statements)
References 44 publications
“…16. For scientific debates on whether AI possesses understanding, see (Mitchell and Krakauer 2023; Negrotti 2012; Søgaard 2022). 17.…”
“…We then compare both types of texts, assessing differences in aspects like vocabulary, morphosyntactic structures, and semantic attributes.

1 Introduction: Large language models (LLMs) (Radford et al., 2018; Scao et al., 2022; Touvron et al., 2023) and instruction-tuned variants (OpenAI, 2023; Taori et al., 2023) output fluent, human-like text in many languages, English being the best represented. The extent to which these models truly understand semantics (Landgrebe and Smith, 2021; Søgaard, 2022), encode representations of the world (Li et al., 2022), generate fake statements (Kumar et al., 2023), or propagate specific moral and ethical values (Santurkar et al., 2023) is currently under active debate. Regardless, a crucial factor contributing to the persuasiveness of these models lies, in the very first place, in their exceptional linguistic fluency.…”