Historical interpretation benefits from identifying analogies among famous people: Who are the Lincolns, Einsteins, Hitlers, and Mozarts? As a knowledge source that benefits many applications in language processing and knowledge representation, Wikipedia provides the information we need to make such comparisons. We investigate several approaches to converting the Wikipedia pages of approximately 600,000 historical figures into vector representations that quantify similarity. Wikipedia pages are also assigned human-annotated category labels according to their contents. A rough similarity estimate could simply count the number of shared Wikipedia categories. However, such counting neither quantifies similarity well (are two pairs with the same number of shared categories equally similar?) nor distinguishes the importance of different categories (does US President matter more than state lawyer when defining similarity?). We therefore use category counts only as an indicator of high-level agreement with our similarity detection algorithms. In particular, we investigate four different unsupervised approaches to representing the semantic associations of individuals: (1) TF-IDF, (2) weighted averages of distributed word embeddings, (3) LDA topic analysis, and (4) DeepWalk graph embeddings derived from page links. All proved effective, but the DeepWalk embedding yielded the best overall accuracy, 88.23%, in our evaluation. Combining
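To illustrate the two kinds of comparison discussed above, the following sketch (our own illustrative example with hypothetical category labels and toy vectors, not code from the paper) contrasts the shared-category baseline with a cosine similarity over any of the page representations (TF-IDF, averaged word embeddings, LDA topic mixtures, or DeepWalk embeddings).

```python
import numpy as np

def shared_category_count(cats_a, cats_b):
    # Baseline: number of Wikipedia categories two figures share.
    return len(set(cats_a) & set(cats_b))

def cosine_similarity(vec_a, vec_b):
    # Similarity between two vector representations of Wikipedia pages.
    denom = np.linalg.norm(vec_a) * np.linalg.norm(vec_b)
    return float(vec_a @ vec_b / denom) if denom else 0.0

# Hypothetical category labels and 4-dimensional toy vectors.
lincoln_cats = ["Presidents of the United States", "Illinois lawyers"]
obama_cats = ["Presidents of the United States", "Illinois state senators"]
print(shared_category_count(lincoln_cats, obama_cats))  # -> 1

lincoln_vec = np.array([0.2, 0.7, 0.1, 0.0])
obama_vec = np.array([0.3, 0.6, 0.1, 0.0])
print(round(cosine_similarity(lincoln_vec, obama_vec), 3))
```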