2020
DOI: 10.1016/j.poetic.2019.101428

Social centralization and semantic collapse: Hyperbolic embeddings of networks and text

Cited by 13 publications (13 citation statements)
References 67 publications
“…where $\boldsymbol{v}$ and $\boldsymbol{u}$ are the "in-vector" and "out-vector", respectively, $Z_i = \sum_{j' \in \mathcal{A}} \exp(\boldsymbol{u}_{j'} \cdot \boldsymbol{v}_i)$ is a normalization constant, and $\mathcal{A}$ is the set of all locations. We follow the standard practice and only use the in-vector, $\boldsymbol{v}$, which is known to be superior to the out-vector in link prediction benchmarks [20-25, 28].…”
Section: Embedding (mentioning)
confidence: 99%
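As a rough illustration of the normalization constant described in the excerpt above, a minimal NumPy sketch is given below. The location names, vectors, and dimensions are invented for the example and are not taken from the cited work.

```python
import numpy as np

# Hypothetical embedding tables: one "in-vector" and one "out-vector" per location.
rng = np.random.default_rng(0)
locations = ["JFK", "LAX", "ORD", "SFO"]   # illustrative location set A
dim = 8
in_vecs = {a: rng.normal(size=dim) for a in locations}    # v_i
out_vecs = {a: rng.normal(size=dim) for a in locations}   # u_j

def context_probability(target, context):
    """P(context | target) under the softmax used by skip-gram models:
    exp(u_context . v_target) / Z_target, where Z_target sums over all locations."""
    v_i = in_vecs[target]
    scores = {j: np.dot(out_vecs[j], v_i) for j in locations}
    z_i = sum(np.exp(s) for s in scores.values())   # normalization constant Z_i
    return np.exp(scores[context]) / z_i

print(context_probability("JFK", "LAX"))
```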
“…Here, we study the skip-gram negative sampling (SGNS), or word2vec, neural-network architecture (see Methods). This neural embedding model, originally designed for learning models of language [19], has been making breakthroughs by revealing novel insights into texts [20-25], networks [26-28] and trajectories [29-34]. It works under the notion that a good representation should facilitate prediction, learning a mapping between words that can predict a target word based on its context (surrounding words).…”
Section: Introduction (mentioning)
confidence: 99%
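For readers unfamiliar with the skip-gram negative sampling setup the excerpt describes, a minimal sketch using the gensim library is shown below. The toy corpus and hyperparameter values are invented for illustration; this is not the cited authors' code.

```python
from gensim.models import Word2Vec

# Toy corpus: each document is a list of tokens; context windows are drawn from these.
sentences = [
    ["network", "embedding", "captures", "structure"],
    ["word", "embedding", "captures", "meaning"],
    ["hyperbolic", "space", "captures", "hierarchy"],
]

# sg=1 selects the skip-gram architecture; negative=5 enables negative sampling (SGNS).
model = Word2Vec(sentences, vector_size=32, window=2, sg=1,
                 negative=5, min_count=1, epochs=50)

# The learned in-vectors live in model.wv; words seen in similar contexts end up nearby.
print(model.wv.most_similar("embedding", topn=3))
```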
“…The meaning of a word is a fundamental object of study in computational linguistics, where researchers have in particular focused on the disambiguation and identification of lexical semantic change (e.g., Schlechtweg, Hätty, et al. 2019; Tahmasebi, Borin, and Jatowt 2019) and word senses (e.g., Amrami and Goldberg 2019; Başkaya et al. 2013; Daille et al. 2016). Similarly, changes in meaning structures have been examined in the field of computational sociology (e.g., Kozlowski, Taddy, and Evans 2020; Linzhuo, Lingfei, and James 2020; Padgett et al. 2020; Rule, Cointet, and Bearman 2015; Schoots et al. 2020), although these latter studies are not based on deep language models and therefore do not take context or polysemy into account.…”
Section: Related Work and Contribution (mentioning)
confidence: 99%
“…Projecting these relationships into Euclidean spaces may imply a loss of information if semantic ties do not follow the triangle inequality. Similarly, hyperbolic spaces, while appropriate for tree-like syntactic relations, are also not sufficient to represent semantic relationships, where triads of tokens need not be closed in terms of similarity (Chami et al. 2020; Linzhuo, Lingfei, and James 2020; Nickel and Kiela 2018).…”
Section: A1 Challenges in the Application of Deep Language Models (mentioning)
confidence: 99%
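To make the contrast with Euclidean distance concrete, the sketch below computes the Poincaré-ball distance used in hyperbolic embeddings such as Nickel and Kiela's. The example points are arbitrary and chosen only to show the behavior near the boundary of the ball.

```python
import numpy as np

def poincare_distance(u, v):
    """Distance between two points inside the unit Poincare ball:
    d(u, v) = arcosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    num = 2.0 * np.sum((u - v) ** 2)
    den = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + num / den)

# Points near the boundary are far apart even when close in Euclidean terms,
# which is what makes this geometry suitable for tree-like (hierarchical) relations.
print(poincare_distance([0.0, 0.1], [0.0, 0.2]))
print(poincare_distance([0.0, 0.95], [0.1, 0.95]))
```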
“…A vector-space embedding of locations (airports, accommodations, and organizations) is learned by using trajectories as input to the standard skip-gram negative sampling (word2vec) neural-network architecture (see Methods). This neural embedding model, originally designed for learning language models [19], has been making breakthroughs by revealing novel insights into texts [20-25] and networks [26, 27]. The model is also computationally efficient, robust to noise, and can encode relations between entities as geometric relationships in the vector space [22, 25, 28, 29].…”
Section: Introduction (mentioning)
confidence: 99%
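A minimal sketch of the trajectory-embedding idea in this last excerpt, again using gensim and assuming each trajectory is treated like a sentence of location tokens. The airport codes and hyperparameters are invented for illustration.

```python
from gensim.models import Word2Vec

# Each trajectory is a sequence of visited locations, handled exactly like a sentence.
trajectories = [
    ["JFK", "ORD", "DEN", "SFO"],
    ["JFK", "ORD", "SEA"],
    ["BOS", "ORD", "DEN", "LAX"],
]

# Skip-gram with negative sampling over location "tokens" (toy hyperparameters).
model = Word2Vec(trajectories, vector_size=16, window=2, sg=1,
                 negative=5, min_count=1, epochs=100)

# Locations that co-occur in similar trajectory contexts end up geometrically close.
print(model.wv.most_similar("ORD", topn=3))
```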