Great musicians have a style that is uniquely their own. This is especially true in jazz, where experts are often capable of identifying a particular performer’s improvisation style simply by listening to their recordings. Designing a computational model that can perform the same task is a challenging prospect, as it forces us to quantify the intuitions that these experts have built up over many years and may be unable to express verbally. To do so, we must think carefully about the areas in which performance styles are likely to differ most substantially. One possibility is that musicians vary in the rhythmic and temporal qualities of their improvisations. We demonstrate that a supervised learning model trained solely on rhythmic features extracted from 300 source-separated audio recordings of jazz pianists identified the performer in 52% of cases, over five times better than chance. The strongest predictors related to a performer’s “feel” (ensemble synchronization) and “swing” (characteristic subdivision of the pulse into long and short intervals). Further analysis revealed two clusters of pianists, identified as “impressionist” and “blues” improvisation styles, with performers in the same cluster sharing similar levels of rhythmic complexity and synchronization. Our findings demonstrate the importance of rhythm in defining a musician’s unique improvisational style, with interesting implications for pedagogy. They also highlight the possibility that artificial intelligence can perform musical style identification tasks normally reserved for expert listeners, with broad applications to stylometry and authorship attribution.
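The classification task described above can be illustrated with a minimal sketch: a nearest-centroid classifier over per-recording rhythmic feature vectors. This is not the paper’s actual model; the feature names (swing ratio, mean asynchrony) echo the predictors mentioned in the abstract, but all values and performer labels below are invented for illustration.

```python
import math

# Toy training data: (swing_ratio, mean_async_ms) -> performer label.
# All numbers are fabricated for the sketch.
train = [
    ((2.1, -12.0), "pianist_A"),
    ((2.0, -10.0), "pianist_A"),
    ((1.3, 8.0), "pianist_B"),
    ((1.4, 10.0), "pianist_B"),
]

def centroids(data):
    """Average the feature vectors for each performer."""
    sums = {}
    for x, y in data:
        s = sums.setdefault(y, [0.0] * len(x) + [0])
        for i, v in enumerate(x):
            s[i] += v
        s[-1] += 1  # count of examples for this performer
    return {y: tuple(v / s[-1] for v in s[:-1]) for y, s in sums.items()}

def predict(cents, x):
    """Assign a new recording to the performer with the nearest centroid."""
    return min(cents, key=lambda y: math.dist(cents[y], x))

cents = centroids(train)
print(predict(cents, (2.05, -11.0)))  # prints "pianist_A"
```

A real pipeline would use many more rhythmic features per recording and a stronger supervised learner with cross-validation, but the core idea is the same: performers occupy distinct regions of a rhythmic feature space, which is also what makes the cluster analysis mentioned above possible.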