The present paper aims to draw out the conception of language implied in the technique of word embeddings, which supported the recent development of deep neural network models in computational linguistics. After a preliminary presentation of the basic functioning of elementary artificial neural networks, we introduce the motivations and capabilities of word embeddings through one of their pioneering models, word2vec. To assess the remarkable results of the latter, we inspect the nature of its underlying mechanisms, which have been characterized as the implicit factorization of a word-context matrix. We then discuss the common association of the "distributional hypothesis" with a "use theory of meaning", often invoked to justify the theoretical basis of word embeddings, and contrast them with the theory of meaning that stems from those mechanisms, seen through the lens of matrix models (such as VSMs and DSMs). Finally, we trace the principles of their possible consistency back through Harris's original distributionalism to the structuralist conception of language of Saussure and Hjelmslev. Beyond giving non-specialist readers access to the technical literature and the state of the art in Natural Language Processing, the paper seeks to reveal the conceptual and philosophical stakes involved in the recent application of new neural network techniques to the computational treatment of language.
Why can computers understand natural language?

"There is no 'philosophy' of language. There is only linguistics."
Louis Hjelmslev, Principes de Grammaire Générale, 1928

1 I borrow this expression from Maniglier (2016, p. 359), who in turn takes inspiration from Deleuze's notion of "image of thought" (Deleuze, 1994, ch. III).
2 See for instance Hale and Wright (1997).
3 See for instance Christopher Manning and Richard Socher's tutorial "Deep Learning for ..."
23 Details of the different kinds of semantic and syntactic analogy relations can be found in Mikolov et al. (2013d) and Schnabel et al. (2015).