Large language models have become extremely popular in a short period of time, owing to their ability to generate text that resembles human writing across a variety of domains and tasks. This popularity and breadth of use also position the technology to fundamentally reshape how written language is perceived and evaluated. Spoken language has long played a role in maintaining power and hegemony in society, especially through ideas of social identity and ``correct'' forms of language. As human communication becomes even more reliant on text and writing, it is important to understand how these processes might shift and whose writing styles are more likely to be reflected back at them through modern AI. We therefore ask the following question: who does generative AI write like? To answer this, we compare writing style features in over 150,000 college admissions essays, submitted to a large public university system and to an engineering program at an elite private university, with a corpus of over 25,000 essays generated by GPT-3.5 and GPT-4 in response to the same writing prompts. For individual writing style features, we find that humans exhibit more variability than generative AI. When comparing all of the writing features together, we find that the AI-generated essays are most similar to essays submitted by students with higher levels of social privilege. Future studies using text data and large language models should consider the authorship characteristics of both humans and AI, as well as how writing is evaluated.