2020
DOI: 10.1007/s13347-020-00393-9
Why Can Computers Understand Natural Language?

Abstract: The present paper aims to draw out the conception of language implied in the technique of word embeddings, which supported the recent development of deep neural network models in computational linguistics. After a preliminary presentation of the basic functioning of elementary artificial neural networks, we introduce the motivations and capabilities of word embeddings through one of its pioneering models, word2vec. To assess the remarkable results of the latter, we inspect the nature of its underlying mechanisms,…
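The distributional intuition behind word2vec — that words appearing in similar contexts receive nearby vectors — can be illustrated with a toy cosine-similarity computation. The sketch below uses made-up 3-dimensional vectors for illustration only; they are not actual word2vec outputs, and real embeddings typically have hundreds of dimensions.

```python
import math

def cosine(u, v):
    # Cosine similarity: dot product divided by the product of the norms.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings: "king" and "queen" occur in similar contexts,
# "banana" does not, so the first pair should score higher.
king = [0.9, 0.8, 0.1]
queen = [0.85, 0.75, 0.2]
banana = [0.1, 0.2, 0.9]

print(cosine(king, queen) > cosine(king, banana))  # → True
```

Distance in the embedding space thus serves as a proxy for distributional similarity, which is what allows downstream models to exploit semantic relations they were never explicitly given.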

Cited by 15 publications (11 citation statements)
References 61 publications
“…The training algorithms adopted by classical neural networks such as CNNs are all backpropagation (BP) algorithms. The BP algorithm computes the error between the network's current output and the expected output, then propagates this error backward to adjust the network weights until the error remains stable within a reasonable range, at which point training ends ( 20 ). The rate of change of the error is obtained by derivation, as follows:…”
Section: Methods Of Natural Language Processing Technologymentioning
confidence: 99%
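The weight-adjustment loop described in the statement above can be sketched with a single linear neuron trained by gradient descent. This is a minimal illustration of the backpropagation idea, not the cited paper's code; the learning rate, epoch count, and target function are arbitrary choices for the sketch.

```python
import random

def train(samples, lr=0.1, epochs=200):
    """Fit y = w*x + b by stochastic gradient descent on squared error."""
    random.seed(0)  # reproducible initial weights
    w, b = random.random(), random.random()
    for _ in range(epochs):
        for x, target in samples:
            y = w * x + b        # forward pass: current output
            err = y - target     # error vs. expected output
            # "Rate of change of the error obtained by derivation":
            # d(err^2/2)/dw = err * x, d(err^2/2)/db = err.
            w -= lr * err * x    # adjust weights against the gradient
            b -= lr * err
    return w, b

# Learn y = 2x + 1 from a few points; training "ends" once the
# error has settled within a small range of zero.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
w, b = train(data)  # w ≈ 2.0, b ≈ 1.0
```

In a multi-layer network the same chain-rule derivation is applied layer by layer, which is what the backward propagation of the error refers to.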
“…However, it is natural language's fundamental capacity for semantic change that, strangely, fails to become the focus point of systems attempting to somehow capture the functionality of its processes. In much of linguistics, semantics and philosophy we find analysis of this issue to be a common target, but, as Juan Luis Gastaldi notes: "epistemological and philosophical reflections are scarce, at best, in the literature of [NLP]" (Gastaldi, 2021).…”
Section: Semantic Noisementioning
confidence: 99%
“…The authors thank Olivia Caramello, Shawn Henry, Maxim Kontsevich, Laurent Lafforgue, Jacob Miller, David Jaz Myers, David Spivak, and Simon Willerton for helpful mathematical discussions. The authors thank Juan Gastaldi and Luc Pellissier for discussions about their philosophical work [11,12] and the anonymous referees who made suggestions that greatly improved this article.…”
Section: Acknowledgementsmentioning
confidence: 99%