2015
DOI: 10.1093/comnet/cnv010

Clustering function: another view on clustering coefficient

Abstract: Assuming that actors u and v have r common neighbors in a social network, we are interested in how likely it is that u and v are adjacent. This question is addressed by studying the collection of conditional probabilities, denoted cl(r), r = 0, 1, 2, …, that two randomly chosen actors of the social network are adjacent, given that they have r common neighbors. The function r → cl(r) describes clustering properties of the network and extends the global clustering coefficient. Our empirical study shows that the f…
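
The clustering function defined in the abstract can be estimated directly from an observed graph by grouping vertex pairs according to their number of common neighbours. The following sketch is one way to do this; networkx and the karate-club graph are illustrative assumptions, not material from the paper.

```python
# Minimal sketch: empirical clustering function cl(r) of a graph, i.e. the
# fraction of vertex pairs with exactly r common neighbours that are adjacent.
# networkx and the karate-club graph are assumptions, not the paper's own code.
from collections import defaultdict
from itertools import combinations

import networkx as nx


def clustering_function(G):
    """Return {r: cl(r)}, the empirical probability of adjacency given r common neighbours."""
    pairs = defaultdict(int)      # number of vertex pairs with r common neighbours
    adjacent = defaultdict(int)   # ... of which the pair is actually an edge
    for u, v in combinations(G.nodes(), 2):
        r = sum(1 for _ in nx.common_neighbors(G, u, v))
        pairs[r] += 1
        if G.has_edge(u, v):
            adjacent[r] += 1
    return {r: adjacent[r] / pairs[r] for r in sorted(pairs)}


if __name__ == "__main__":
    G = nx.karate_club_graph()
    for r, cl_r in clustering_function(G).items():
        print(f"cl({r}) = {cl_r:.3f}")
```

The brute-force loop over all vertex pairs is quadratic in the number of vertices, which is fine for small networks; for large graphs one would instead estimate cl(r) from a sample of pairs.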

Cited by 10 publications (14 citation statements) · References 30 publications
“…The proof of (8) is similar to that of (7). It makes use of the observation that the typical adjacency relation is witnessed by a single attribute.…”
Section: Proof (mentioning)
confidence: 83%
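
The observation that a typical adjacency is witnessed by a single attribute can be illustrated with a quick simulation of a binomial random intersection graph, where each vertex holds each of m attributes independently with probability p and two vertices are adjacent when they share an attribute. The parameter values below are illustrative assumptions, not those of the cited proof.

```python
# Minimal sketch: in a sparse random intersection graph, almost all adjacent
# pairs share exactly one attribute.  All parameter values are assumptions.
import random
from itertools import combinations

n, m, p = 500, 500, 0.004   # vertices, attributes, vertex-attribute link probability
random.seed(0)

# attribute sets: vertex i holds attribute a with probability p, independently
attrs = [{a for a in range(m) if random.random() < p} for _ in range(n)]

single = total = 0
for i, j in combinations(range(n), 2):
    shared = len(attrs[i] & attrs[j])
    if shared >= 1:              # i and j are adjacent
        total += 1
        single += shared == 1    # adjacency witnessed by exactly one attribute

if total:
    print(f"adjacent pairs: {total}, share exactly one attribute: {single / total:.1%}")
```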
“…Now, the statistical dependence of neighboring adjacency relations persists as n, m → +∞, and the asymptotic bivariate degree-degree distribution is not a product of marginal asymptotic degree distributions, see (7). One can show that in this case the degrees of adjacent vertices are positively correlated and G admits a non-vanishing positive Newman's assortativity coefficient, provided that the vertex degree distribution has a finite third moment (cf.…”
Section: Introduction (mentioning)
confidence: 99%
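
Newman's assortativity coefficient referred to here is the Pearson correlation between the degrees at the two endpoints of a randomly chosen edge. It can be computed directly with networkx, as in the sketch below; the generated graph is only a placeholder, not the model of the cited work.

```python
# Minimal sketch: Newman's degree assortativity coefficient, i.e. the Pearson
# correlation of endpoint degrees over the edges of a graph.
# The Barabási–Albert graph below is only a placeholder data set.
import networkx as nx

G = nx.barabasi_albert_graph(n=1000, m=3, seed=1)
print(f"degree assortativity: {nx.degree_assortativity_coefficient(G):.3f}")
```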
“…We note that the moment conditions EX₁² < ∞ and EY₁ < ∞ of Theorem 2 are the minimal ones, as the numbers a₂ = EX₁² and b₁ = EY₁ enter (implicitly) both formulas (7) and (8).…”
Section: Introduction (mentioning)
confidence: 93%
“…Proof of (i). The intuition behind formula (7) is that, with high probability, the adjacency relation v₁ ∼ v₂ as well as all common neighbors of v₁ and v₂ are witnessed by the same common attribute (all attributes having equal chances). Furthermore, conditionally on the event that this attribute is w₁, and given Y₁, Y₂, X₁, we have that the random variables…”
Section: Proof (mentioning)
confidence: 99%
“…In this case the limiting model can be parameterised by its mean degree λ = βγ² and attribute intensity µ = βγ. By extending the model with random node weights, we obtain a statistical network model which is rich enough to admit heavy tails and nontrivial clustering properties [4,5,8,10]. Such models can also be generalised to the directed case [6].…”
Section: Introduction (mentioning)
confidence: 99%
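
As a rough illustration of the parameterisation above, the sketch below simulates a binomial random intersection graph with m = βn attributes and vertex–attribute link probability p = γ/n, a scaling that reproduces mean degree ≈ βγ² and attribute intensity ≈ βγ in the sparse limit; this particular scaling is an assumption and is not taken from the cited paper.

```python
# Minimal sketch: binomial random intersection graph with m = beta*n attributes
# and vertex-attribute link probability p = gamma/n.  Under this (assumed)
# scaling the mean degree is ~ beta*gamma**2 and the mean number of attributes
# per vertex (attribute intensity) is ~ beta*gamma.
import random
from itertools import combinations

n, beta, gamma = 1000, 1.0, 2.0
m, p = int(beta * n), gamma / n
random.seed(0)

attrs = [{a for a in range(m) if random.random() < p} for _ in range(n)]

degree = [0] * n
for i, j in combinations(range(n), 2):
    if attrs[i] & attrs[j]:      # adjacent if they share at least one attribute
        degree[i] += 1
        degree[j] += 1

print("mean degree:        ", sum(degree) / n)                  # close to beta*gamma**2 = 4.0
print("attribute intensity:", sum(len(s) for s in attrs) / n)   # close to beta*gamma = 2.0
```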