Following up on numerous reports of analogy-based identification of "linguistic regularities" in word embeddings, this study applies the widely used vector offset method to four types of linguistic relations: inflectional and derivational morphology, and lexicographic and encyclopedic semantics. We present a balanced test set with 99,200 questions in 40 categories, and we systematically examine how accuracy for different categories is affected by window size and dimensionality of the SVD-based word embeddings. We also show that GloVe and SVD yield similar patterns of results for different categories, offering further evidence for conceptual similarity between count-based and neural-net-based models.
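The vector offset method mentioned above answers an analogy question a : b :: c : ? by searching for the word whose vector is closest to b − a + c. A minimal sketch with invented toy vectors (real experiments use trained SVD or GloVe embeddings; the numbers below are assumptions for illustration only):

```python
import numpy as np

# Toy embeddings, invented for illustration.
embeddings = {
    "king":  np.array([0.8, 0.7, 0.1]),
    "queen": np.array([0.8, 0.1, 0.7]),
    "man":   np.array([0.9, 0.8, 0.0]),
    "woman": np.array([0.9, 0.2, 0.6]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy(a, b, c, vocab):
    """Answer a : b :: c : ? via the offset b - a + c, excluding the cue words."""
    target = vocab[b] - vocab[a] + vocab[c]
    candidates = {w: cosine(target, vec) for w, vec in vocab.items()
                  if w not in (a, b, c)}
    return max(candidates, key=candidates.get)

print(analogy("man", "king", "woman", embeddings))  # -> "queen"
```

Excluding the three cue words from the candidate set is a standard detail of the method; without it, the nearest neighbour of b − a + c is very often b or c itself.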
Autonomous agents must often detect affordances: the set of behaviors enabled by a situation. Affordance detection is particularly helpful in domains with large action spaces, allowing the agent to prune its search space by avoiding futile behaviors. This paper presents a method for affordance extraction via word embeddings trained on a tagged Wikipedia corpus. The resulting word vectors are treated as a common knowledge database which can be queried using linear algebra. We apply this method to a reinforcement learning agent in a text-only environment and show that affordance-based action selection improves performance in most cases. Our method increases the computational complexity of each learning step but significantly reduces the total number of steps needed. In addition, the agent's action selections begin to resemble those a human would choose.
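One simple way to "query a word-vector knowledge base with linear algebra" is to rank candidate verbs by cosine similarity to an object noun, keeping only the most plausible actions. This is a simplified sketch, not the paper's exact procedure; the vectors are invented toy values standing in for embeddings trained on a tagged Wikipedia corpus:

```python
import numpy as np

# Toy vectors, invented for illustration.
vectors = {
    "sword": np.array([0.9, 0.1, 0.2]),
    "wield": np.array([0.8, 0.2, 0.3]),
    "read":  np.array([0.1, 0.9, 0.1]),
    "eat":   np.array([0.1, 0.1, 0.9]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_affordances(noun, verbs, vecs):
    """Order candidate verbs by similarity to the noun, most plausible first."""
    return sorted(verbs, key=lambda v: cosine(vecs[noun], vecs[v]), reverse=True)

print(rank_affordances("sword", ["wield", "read", "eat"], vectors))
# "wield" ranks first for "sword"
```

An agent with a large action vocabulary can use such a ranking to try high-similarity verbs first, pruning actions that the embedding space suggests are futile for the object at hand.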
In this corpus-based study I contribute to the description and analysis of linguistic and cultural variation in the conceptualization of sympathy, compassion, and empathy. On the basis of a contrastive semantic analysis of sympathy, compassion, and empathy in English and their Russian translational equivalents, sočuvstvie, sostradanie, and sopereživanie, I demonstrate significant differences in the conceptualization of these words, which I explain by reference to the prevalence of different models of social interaction in Anglo and Russian cultures, as well as different cultural attitudes towards emotional expression. As a methodology I apply the Natural Semantic Metalanguage (NSM), which is based on empirically established lexical and grammatical universals, and argue that it is a powerful tool in contrastive studies.
This introduction to the Special Issue summarises Anna Wierzbicka's contribution to the linguistic study of meaning. It presents the foundations of the approach known as the Natural Semantic Metalanguage (NSM) developed by Wierzbicka, and traces the origin of her ideas to Leibniz. The article discusses the current state of the approach, including the set of 65 semantic primitives, universal grammar, and the principle of reductive paraphrase in semantic explications. The framework has been tested on about thirty languages of diverse origin. The applications of the approach are broad and encompass lexical areas of emotions, social categories, speech act verbs, mental states, artefacts and animals, verbs of motion, and kinship terms (among others), as well as grammatical constructions.