Who is good at prediction? Addressing this question is key to recruiting and cultivating accurate crowds and effectively aggregating their judgments. Recent research on superforecasting has demonstrated the importance of individual, persistent skill in crowd prediction. This chapter takes stock of skill identification measures in probability estimation tasks and complements the review with original analyses that compare these measures directly within the same dataset. We classify all measures into five broad categories: 1) accuracy-related measures, such as proper scores, model-based estimates of accuracy, and excess volatility scores; 2) intersubjective measures, including proxy, surrogate, and similarity scores; 3) forecasting behaviors, including activity, belief updating, extremity, coherence, and linguistic properties of rationales; 4) dispositional measures of fluid intelligence, cognitive reflection, numeracy, personality, and thinking styles; and 5) measures of expertise, including demonstrated knowledge, confidence calibration, biographical, and self-rated expertise. Among non-accuracy-related measures, we report a median correlation of r = 0.20 with accuracy outcomes. In the absence of accuracy data, we find that intersubjective and behavioral measures are most strongly correlated with forecasting accuracy. These results hold in a LASSO machine-learning model with automated variable selection. Two focal applications provide context for these assessments: long-term prediction and corporate forecasting tournaments.
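To make the LASSO analysis concrete, the following is a minimal sketch in Python, assuming scikit-learn and entirely synthetic data. The feature names (proxy_score, update_freq, crt_score, self_rated_expertise) are hypothetical stand-ins for the measure categories above, not the chapter's actual variables; the L1 penalty's cross-validated strength performs the automated variable selection, shrinking uninformative predictors of accuracy to exactly zero.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic setup: 300 forecasters answer 50 binary questions.
n_forecasters, n_questions = 300, 50
skill = rng.normal(size=n_forecasters)            # latent forecasting skill
outcomes = rng.integers(0, 2, size=n_questions)   # 0/1 question resolutions

# More-skilled forecasters tilt probabilities toward the true outcome.
signal = 1.0 / (1.0 + np.exp(-skill))             # in (0, 1)
probs = np.clip(
    signal[:, None] * outcomes + (1.0 - signal[:, None]) * 0.5
    + rng.normal(scale=0.1, size=(n_forecasters, n_questions)),
    0.01, 0.99,
)

# Mean Brier score per forecaster (a proper score; lower = more accurate).
brier = ((probs - outcomes) ** 2).mean(axis=1)

# Hypothetical skill measures: three carry signal, one is pure noise.
features = {
    "proxy_score": skill + rng.normal(scale=0.8, size=n_forecasters),
    "update_freq": skill + rng.normal(scale=1.0, size=n_forecasters),
    "crt_score": skill + rng.normal(scale=1.2, size=n_forecasters),
    "self_rated_expertise": rng.normal(size=n_forecasters),
}
X = StandardScaler().fit_transform(np.column_stack(list(features.values())))

# Cross-validated LASSO: the L1 penalty zeroes out uninformative
# predictors, i.e. automated variable selection.
model = LassoCV(cv=5, random_state=0).fit(X, brier)
for name, coef in zip(features, model.coef_):
    print(f"{name:>22}: {coef:+.4f}")
```

In this toy run, the informative measures receive negative coefficients (higher measure values predict lower, i.e. better, Brier scores), while the uninformative self-rating is typically shrunk to zero.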